Test Report: Docker_Windows 18222

364dec8bbfa467ece5e4dc002f47e6311a48ec7e:2024-02-26:33307

Test failures (12/327)

TestErrorSpam/setup (69.21s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-614200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-614200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 --driver=docker: (1m9.2136403s)
error_spam_test.go:96: unexpected stderr: "W0226 10:36:11.933506    8232 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-614200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=18222
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-614200 in cluster nospam-614200
* Pulling base image v0.0.42-1708008208-17936 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-614200" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0226 10:36:11.933506    8232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (69.21s)
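Editor's note: the start itself succeeded (1m9s); the failure comes from error_spam_test.go:96 treating the Docker CLI context warning as unexpected stderr. Checks like this typically scan stderr against an allow-list of known-noisy warnings. The sketch below is a hypothetical version of such a filter — the allow-list string is taken from this run, not from minikube's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// isBenignStderr reports whether a stderr line matches a known-noisy
// warning that should not count as "error spam". Hypothetical helper;
// the pattern is an assumption based on the warning seen in this run.
func isBenignStderr(line string) bool {
	benign := []string{
		`Unable to resolve the current Docker CLI context "default"`,
	}
	for _, b := range benign {
		if strings.Contains(line, b) {
			return true
		}
	}
	return false
}

func main() {
	stderr := `W0226 10:36:11.933506    8232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found`
	fmt.Println(isBenignStderr(stderr)) // true: this warning would be tolerated
}
```

With a filter like this, the warning above would be skipped instead of failing the test; any other stderr line would still be flagged.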
TestFunctional/serial/MinikubeKubectlCmdDirectly (6.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-366900
helpers_test.go:235: (dbg) docker inspect functional-366900:
-- stdout --
	[
	    {
	        "Id": "a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71",
	        "Created": "2024-02-26T10:38:31.614949225Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 23467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T10:38:32.252101783Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/hosts",
	        "LogPath": "/var/lib/docker/containers/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71-json.log",
	        "Name": "/functional-366900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-366900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-366900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7fda5a5145c506c32fc230333fae31a837b889acbfe6eedee91220b41e6e7c39-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fda5a5145c506c32fc230333fae31a837b889acbfe6eedee91220b41e6e7c39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fda5a5145c506c32fc230333fae31a837b889acbfe6eedee91220b41e6e7c39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fda5a5145c506c32fc230333fae31a837b889acbfe6eedee91220b41e6e7c39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-366900",
	                "Source": "/var/lib/docker/volumes/functional-366900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-366900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-366900",
	                "name.minikube.sigs.k8s.io": "functional-366900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d9cf1abed73770f068beefc97da66d2767b1850c927995846f406aad27ef1d82",
	            "SandboxKey": "/var/run/docker/netns/d9cf1abed737",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51485"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51486"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51488"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51489"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-366900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a8ef7f322885",
	                        "functional-366900"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "323607eae3e318d90ca5811f17ce94e0b1279702245621c5b55af156f5ea81b5",
	                    "EndpointID": "6beb7e048e04693aabab6bcf158af6358250e4fe55ce4f2e616bee781c7f4b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-366900",
	                        "a8ef7f322885"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-366900 -n functional-366900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-366900 -n functional-366900: (1.3252271s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 logs -n 25: (2.5148736s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-614200 --log_dir                                     | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-614200                                            | nospam-614200     | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:37 UTC |
	| start   | -p functional-366900                                        | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:37 UTC | 26 Feb 24 10:39 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-366900                                        | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:39 UTC | 26 Feb 24 10:40 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-366900 cache add                                 | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-366900 cache add                                 | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-366900 cache add                                 | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-366900 cache add                                 | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | minikube-local-cache-test:functional-366900                 |                   |                   |         |                     |                     |
	| cache   | functional-366900 cache delete                              | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | minikube-local-cache-test:functional-366900                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	| ssh     | functional-366900 ssh sudo                                  | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-366900                                           | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-366900 ssh                                       | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-366900 cache reload                              | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	| ssh     | functional-366900 ssh                                       | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-366900 kubectl --                                | functional-366900 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:40 UTC | 26 Feb 24 10:40 UTC |
	|         | --context functional-366900                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 10:39:36
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 10:39:36.679966    3116 out.go:291] Setting OutFile to fd 760 ...
	I0226 10:39:36.679966    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:39:36.681009    3116 out.go:304] Setting ErrFile to fd 476...
	I0226 10:39:36.681009    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:39:36.700292    3116 out.go:298] Setting JSON to false
	I0226 10:39:36.702969    3116 start.go:129] hostinfo: {"hostname":"minikube7","uptime":1253,"bootTime":1708942723,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:39:36.702969    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:39:36.707985    3116 out.go:177] * [functional-366900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:39:36.711998    3116 notify.go:220] Checking for updates...
	I0226 10:39:36.716665    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:39:36.719057    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 10:39:36.722019    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:39:36.726121    3116 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 10:39:36.728904    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 10:39:36.732730    3116 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 10:39:36.732895    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:39:36.997309    3116 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:39:37.006222    3116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:39:37.355036    3116 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-26 10:39:37.3154399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:39:37.361427    3116 out.go:177] * Using the docker driver based on existing profile
	I0226 10:39:37.363583    3116 start.go:299] selected driver: docker
	I0226 10:39:37.363583    3116 start.go:903] validating driver "docker" against &{Name:functional-366900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-366900 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:39:37.363583    3116 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 10:39:37.381533    3116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:39:37.714069    3116 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-26 10:39:37.671977933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 10:39:37.820641    3116 cni.go:84] Creating CNI manager for ""
	I0226 10:39:37.820737    3116 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 10:39:37.820737    3116 start_flags.go:323] config:
	{Name:functional-366900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-366900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:39:37.826189    3116 out.go:177] * Starting control plane node functional-366900 in cluster functional-366900
	I0226 10:39:37.828790    3116 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 10:39:37.831655    3116 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 10:39:37.835398    3116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 10:39:37.835398    3116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 10:39:37.835398    3116 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 10:39:37.835965    3116 cache.go:56] Caching tarball of preloaded images
	I0226 10:39:37.836411    3116 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 10:39:37.836571    3116 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0226 10:39:37.836771    3116 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\config.json ...
	I0226 10:39:38.011110    3116 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 10:39:38.011110    3116 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 10:39:38.011322    3116 cache.go:194] Successfully downloaded all kic artifacts
	I0226 10:39:38.011495    3116 start.go:365] acquiring machines lock for functional-366900: {Name:mk63720752490ac4c8d3ac42ffb00958b3fb0825 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 10:39:38.011774    3116 start.go:369] acquired machines lock for "functional-366900" in 207.3µs
	I0226 10:39:38.012030    3116 start.go:96] Skipping create...Using existing machine configuration
	I0226 10:39:38.012113    3116 fix.go:54] fixHost starting: 
	I0226 10:39:38.030270    3116 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
	I0226 10:39:38.199902    3116 fix.go:102] recreateIfNeeded on functional-366900: state=Running err=<nil>
	W0226 10:39:38.199930    3116 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 10:39:38.204328    3116 out.go:177] * Updating the running docker "functional-366900" container ...
	I0226 10:39:38.207671    3116 machine.go:88] provisioning docker machine ...
	I0226 10:39:38.207886    3116 ubuntu.go:169] provisioning hostname "functional-366900"
	I0226 10:39:38.216892    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:38.390077    3116 main.go:141] libmachine: Using SSH client type: native
	I0226 10:39:38.390641    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51485 <nil> <nil>}
	I0226 10:39:38.390641    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-366900 && echo "functional-366900" | sudo tee /etc/hostname
	I0226 10:39:38.588884    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-366900
	
	I0226 10:39:38.600068    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:38.765068    3116 main.go:141] libmachine: Using SSH client type: native
	I0226 10:39:38.765921    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51485 <nil> <nil>}
	I0226 10:39:38.765921    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-366900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-366900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-366900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 10:39:38.953244    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 10:39:38.953244    3116 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 10:39:38.953244    3116 ubuntu.go:177] setting up certificates
	I0226 10:39:38.953244    3116 provision.go:83] configureAuth start
	I0226 10:39:38.963567    3116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-366900
	I0226 10:39:39.130707    3116 provision.go:138] copyHostCerts
	I0226 10:39:39.131335    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0226 10:39:39.131692    3116 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 10:39:39.131773    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 10:39:39.131801    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 10:39:39.133050    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0226 10:39:39.133436    3116 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 10:39:39.133436    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 10:39:39.133828    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 10:39:39.134831    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0226 10:39:39.135167    3116 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 10:39:39.135167    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 10:39:39.135288    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 10:39:39.136405    3116 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-366900 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-366900]
	I0226 10:39:39.273625    3116 provision.go:172] copyRemoteCerts
	I0226 10:39:39.284510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 10:39:39.296664    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:39.458506    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:39:39.590644    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0226 10:39:39.590860    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 10:39:39.631238    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0226 10:39:39.631958    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0226 10:39:39.670478    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0226 10:39:39.670875    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 10:39:39.713661    3116 provision.go:86] duration metric: configureAuth took 760.3351ms
	I0226 10:39:39.713661    3116 ubuntu.go:193] setting minikube options for container-runtime
	I0226 10:39:39.714262    3116 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 10:39:39.723379    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:39.880202    3116 main.go:141] libmachine: Using SSH client type: native
	I0226 10:39:39.880202    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51485 <nil> <nil>}
	I0226 10:39:39.880202    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 10:39:40.084122    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 10:39:40.084177    3116 ubuntu.go:71] root file system type: overlay
	I0226 10:39:40.084365    3116 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 10:39:40.093801    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:40.279202    3116 main.go:141] libmachine: Using SSH client type: native
	I0226 10:39:40.279871    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51485 <nil> <nil>}
	I0226 10:39:40.279992    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 10:39:40.490448    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 10:39:40.504962    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:40.676941    3116 main.go:141] libmachine: Using SSH client type: native
	I0226 10:39:40.676941    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51485 <nil> <nil>}
	I0226 10:39:40.676941    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 10:39:40.881732    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 10:39:40.881732    3116 machine.go:91] provisioned docker machine in 2.6740539s
	I0226 10:39:40.881732    3116 start.go:300] post-start starting for "functional-366900" (driver="docker")
	I0226 10:39:40.881732    3116 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 10:39:40.895386    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 10:39:40.903083    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:41.072531    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:39:41.219955    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 10:39:41.231673    3116 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0226 10:39:41.231673    3116 command_runner.go:130] > NAME="Ubuntu"
	I0226 10:39:41.231673    3116 command_runner.go:130] > VERSION_ID="22.04"
	I0226 10:39:41.231673    3116 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0226 10:39:41.231673    3116 command_runner.go:130] > VERSION_CODENAME=jammy
	I0226 10:39:41.231673    3116 command_runner.go:130] > ID=ubuntu
	I0226 10:39:41.231673    3116 command_runner.go:130] > ID_LIKE=debian
	I0226 10:39:41.231673    3116 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0226 10:39:41.231673    3116 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0226 10:39:41.231673    3116 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0226 10:39:41.231673    3116 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0226 10:39:41.231673    3116 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0226 10:39:41.231673    3116 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 10:39:41.231673    3116 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 10:39:41.231673    3116 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 10:39:41.231673    3116 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 10:39:41.231673    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 10:39:41.232389    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 10:39:41.233402    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 10:39:41.233402    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> /etc/ssl/certs/118682.pem
	I0226 10:39:41.233402    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11868\hosts -> hosts in /etc/test/nested/copy/11868
	I0226 10:39:41.233402    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11868\hosts -> /etc/test/nested/copy/11868/hosts
	I0226 10:39:41.247368    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11868
	I0226 10:39:41.267406    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 10:39:41.305559    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11868\hosts --> /etc/test/nested/copy/11868/hosts (40 bytes)
	I0226 10:39:41.343497    3116 start.go:303] post-start completed in 461.7637ms
	I0226 10:39:41.356351    3116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 10:39:41.363773    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:41.520913    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:39:41.644479    3116 command_runner.go:130] > 1%!
	(MISSING)I0226 10:39:41.657537    3116 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 10:39:41.670847    3116 command_runner.go:130] > 952G
	I0226 10:39:41.670975    3116 fix.go:56] fixHost completed within 3.6589075s
	I0226 10:39:41.671012    3116 start.go:83] releasing machines lock for "functional-366900", held for 3.6591248s
	I0226 10:39:41.680596    3116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-366900
	I0226 10:39:41.848546    3116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 10:39:41.860155    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:41.861366    3116 ssh_runner.go:195] Run: cat /version.json
	I0226 10:39:41.869896    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:42.018385    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:39:42.032971    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:39:42.329761    3116 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "f5c3363394486b02d6f0e8fa364ea0d9cfb50289"}
	I0226 10:39:42.329918    3116 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0226 10:39:42.342983    3116 ssh_runner.go:195] Run: systemctl --version
	I0226 10:39:42.356069    3116 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0226 10:39:42.356069    3116 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0226 10:39:42.367939    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 10:39:42.380983    3116 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0226 10:39:42.380983    3116 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0226 10:39:42.380983    3116 command_runner.go:130] > Device: d9h/217d	Inode: 212         Links: 1
	I0226 10:39:42.380983    3116 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0226 10:39:42.380983    3116 command_runner.go:130] > Access: 2024-02-26 10:27:05.348123647 +0000
	I0226 10:39:42.380983    3116 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0226 10:39:42.380983    3116 command_runner.go:130] > Change: 2024-02-26 10:26:34.404506623 +0000
	I0226 10:39:42.380983    3116 command_runner.go:130] >  Birth: 2024-02-26 10:26:34.404506623 +0000
	I0226 10:39:42.392382    3116 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0226 10:39:42.409217    3116 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0226 10:39:42.411449    3116 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0226 10:39:42.422366    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 10:39:42.441865    3116 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0226 10:39:42.441898    3116 start.go:475] detecting cgroup driver to use...
	I0226 10:39:42.441950    3116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 10:39:42.442153    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 10:39:42.470970    3116 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0226 10:39:42.483246    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0226 10:39:42.513623    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 10:39:42.535615    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 10:39:42.546566    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 10:39:42.583536    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 10:39:42.614176    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 10:39:42.645557    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 10:39:42.677215    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 10:39:42.710388    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 10:39:42.745591    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 10:39:42.764787    3116 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0226 10:39:42.779529    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 10:39:42.809415    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:39:42.993455    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 10:39:53.492997    3116 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.4994904s)
	I0226 10:39:53.493025    3116 start.go:475] detecting cgroup driver to use...
	I0226 10:39:53.493025    3116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 10:39:53.506126    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 10:39:53.535582    3116 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0226 10:39:53.535582    3116 command_runner.go:130] > [Unit]
	I0226 10:39:53.535582    3116 command_runner.go:130] > Description=Docker Application Container Engine
	I0226 10:39:53.535582    3116 command_runner.go:130] > Documentation=https://docs.docker.com
	I0226 10:39:53.535582    3116 command_runner.go:130] > BindsTo=containerd.service
	I0226 10:39:53.535582    3116 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0226 10:39:53.535582    3116 command_runner.go:130] > Wants=network-online.target
	I0226 10:39:53.535714    3116 command_runner.go:130] > Requires=docker.socket
	I0226 10:39:53.535714    3116 command_runner.go:130] > StartLimitBurst=3
	I0226 10:39:53.535714    3116 command_runner.go:130] > StartLimitIntervalSec=60
	I0226 10:39:53.535714    3116 command_runner.go:130] > [Service]
	I0226 10:39:53.535784    3116 command_runner.go:130] > Type=notify
	I0226 10:39:53.535784    3116 command_runner.go:130] > Restart=on-failure
	I0226 10:39:53.535784    3116 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0226 10:39:53.535828    3116 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0226 10:39:53.535828    3116 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0226 10:39:53.535828    3116 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0226 10:39:53.535828    3116 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0226 10:39:53.535911    3116 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0226 10:39:53.535911    3116 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0226 10:39:53.535945    3116 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0226 10:39:53.535945    3116 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0226 10:39:53.535945    3116 command_runner.go:130] > ExecStart=
	I0226 10:39:53.536046    3116 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0226 10:39:53.536046    3116 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0226 10:39:53.536087    3116 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0226 10:39:53.536087    3116 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0226 10:39:53.536087    3116 command_runner.go:130] > LimitNOFILE=infinity
	I0226 10:39:53.536087    3116 command_runner.go:130] > LimitNPROC=infinity
	I0226 10:39:53.536087    3116 command_runner.go:130] > LimitCORE=infinity
	I0226 10:39:53.536164    3116 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0226 10:39:53.536164    3116 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0226 10:39:53.536164    3116 command_runner.go:130] > TasksMax=infinity
	I0226 10:39:53.536164    3116 command_runner.go:130] > TimeoutStartSec=0
	I0226 10:39:53.536164    3116 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0226 10:39:53.536229    3116 command_runner.go:130] > Delegate=yes
	I0226 10:39:53.536229    3116 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0226 10:39:53.536229    3116 command_runner.go:130] > KillMode=process
	I0226 10:39:53.536229    3116 command_runner.go:130] > [Install]
	I0226 10:39:53.536229    3116 command_runner.go:130] > WantedBy=multi-user.target
	I0226 10:39:53.536316    3116 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 10:39:53.546993    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 10:39:53.573953    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 10:39:53.604812    3116 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0226 10:39:53.617464    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0226 10:39:53.637691    3116 command_runner.go:130] > /usr/bin/cri-dockerd
	I0226 10:39:53.652864    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 10:39:53.673445    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 10:39:53.723445    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 10:39:53.907186    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 10:39:54.090221    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 10:39:54.090221    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 10:39:54.135375    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:39:54.374452    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 10:39:55.085156    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0226 10:39:55.120340    3116 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0226 10:39:55.167552    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 10:39:55.202251    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0226 10:39:55.402226    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0226 10:39:55.571114    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:39:55.713891    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0226 10:39:55.753829    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 10:39:55.787363    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:39:55.943590    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0226 10:39:56.096734    3116 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0226 10:39:56.109099    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0226 10:39:56.123575    3116 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0226 10:39:56.123575    3116 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0226 10:39:56.123575    3116 command_runner.go:130] > Device: e2h/226d	Inode: 686         Links: 1
	I0226 10:39:56.123575    3116 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0226 10:39:56.123575    3116 command_runner.go:130] > Access: 2024-02-26 10:39:55.956280756 +0000
	I0226 10:39:56.123575    3116 command_runner.go:130] > Modify: 2024-02-26 10:39:55.956280756 +0000
	I0226 10:39:56.123575    3116 command_runner.go:130] > Change: 2024-02-26 10:39:55.966280369 +0000
	I0226 10:39:56.123575    3116 command_runner.go:130] >  Birth: -
	I0226 10:39:56.123575    3116 start.go:543] Will wait 60s for crictl version
	I0226 10:39:56.136780    3116 ssh_runner.go:195] Run: which crictl
	I0226 10:39:56.147796    3116 command_runner.go:130] > /usr/bin/crictl
	I0226 10:39:56.159050    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 10:39:56.244762    3116 command_runner.go:130] > Version:  0.1.0
	I0226 10:39:56.244762    3116 command_runner.go:130] > RuntimeName:  docker
	I0226 10:39:56.244762    3116 command_runner.go:130] > RuntimeVersion:  25.0.3
	I0226 10:39:56.244762    3116 command_runner.go:130] > RuntimeApiVersion:  v1
	I0226 10:39:56.244762    3116 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0226 10:39:56.254498    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 10:39:56.299965    3116 command_runner.go:130] > 25.0.3
	I0226 10:39:56.310364    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 10:39:56.356196    3116 command_runner.go:130] > 25.0.3
	I0226 10:39:56.359703    3116 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0226 10:39:56.370032    3116 cli_runner.go:164] Run: docker exec -t functional-366900 dig +short host.docker.internal
	I0226 10:39:56.643593    3116 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 10:39:56.654532    3116 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 10:39:56.668151    3116 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0226 10:39:56.676140    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:56.830676    3116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 10:39:56.840547    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0226 10:39:56.879262    3116 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0226 10:39:56.879262    3116 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 10:39:56.880407    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 10:39:56.880526    3116 docker.go:615] Images already preloaded, skipping extraction
	I0226 10:39:56.889497    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0226 10:39:56.926383    3116 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0226 10:39:56.926383    3116 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 10:39:56.926383    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 10:39:56.926383    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0226 10:39:56.936012    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 10:39:57.039474    3116 command_runner.go:130] > cgroupfs
	I0226 10:39:57.039997    3116 cni.go:84] Creating CNI manager for ""
	I0226 10:39:57.040055    3116 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 10:39:57.040055    3116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 10:39:57.040055    3116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-366900 NodeName:functional-366900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 10:39:57.040055    3116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-366900"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 10:39:57.040055    3116 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-366900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-366900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0226 10:39:57.053150    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 10:39:57.075825    3116 command_runner.go:130] > kubeadm
	I0226 10:39:57.075825    3116 command_runner.go:130] > kubectl
	I0226 10:39:57.075825    3116 command_runner.go:130] > kubelet
	I0226 10:39:57.075944    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 10:39:57.088987    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 10:39:57.106914    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0226 10:39:57.137745    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 10:39:57.164004    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0226 10:39:57.204693    3116 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0226 10:39:57.218043    3116 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0226 10:39:57.218043    3116 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900 for IP: 192.168.49.2
	I0226 10:39:57.218043    3116 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:39:57.218787    3116 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 10:39:57.218787    3116 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 10:39:57.219527    3116 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.key
	I0226 10:39:57.219527    3116 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\apiserver.key.dd3b5fb2
	I0226 10:39:57.220243    3116 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\proxy-client.key
	I0226 10:39:57.220243    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0226 10:39:57.220477    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0226 10:39:57.220642    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0226 10:39:57.220801    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0226 10:39:57.220801    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0226 10:39:57.220801    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0226 10:39:57.220801    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0226 10:39:57.220801    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0226 10:39:57.221414    3116 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 10:39:57.221414    3116 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 10:39:57.221414    3116 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 10:39:57.222322    3116 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 10:39:57.222513    3116 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 10:39:57.222513    3116 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 10:39:57.223141    3116 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 10:39:57.223469    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:39:57.223635    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem -> /usr/share/ca-certificates/11868.pem
	I0226 10:39:57.223635    3116 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> /usr/share/ca-certificates/118682.pem
	I0226 10:39:57.224247    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 10:39:57.259921    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 10:39:57.298058    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 10:39:57.335474    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 10:39:57.372776    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 10:39:57.412618    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 10:39:57.458488    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 10:39:57.496158    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 10:39:57.533541    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 10:39:57.573608    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 10:39:57.612876    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 10:39:57.652896    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 10:39:57.695136    3116 ssh_runner.go:195] Run: openssl version
	I0226 10:39:57.709977    3116 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0226 10:39:57.721314    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 10:39:57.751611    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:39:57.763137    3116 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:39:57.763137    3116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:39:57.775306    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:39:57.789982    3116 command_runner.go:130] > b5213941
	I0226 10:39:57.799978    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 10:39:57.828195    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 10:39:57.857543    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 10:39:57.869792    3116 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 10:39:57.869792    3116 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 10:39:57.879608    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 10:39:57.895058    3116 command_runner.go:130] > 51391683
	I0226 10:39:57.905110    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 10:39:57.932814    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 10:39:57.963945    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 10:39:57.974461    3116 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 10:39:57.974461    3116 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 10:39:57.984468    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 10:39:57.999031    3116 command_runner.go:130] > 3ec20f2e
	I0226 10:39:58.009606    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
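The `ln -fs` commands above install each CA under /etc/ssl/certs by its OpenSSL subject hash, so OpenSSL's hash-based lookup can find it. A minimal sketch of the same convention, using a throwaway self-signed CA in a temp directory (all names and paths below are illustrative, not from the run):

```shell
# Compute a certificate's subject hash and install it as <hash>.0,
# mirroring the /etc/ssl/certs layout minikube builds above.
# The CA name "exampleCA" and temp paths are illustrative.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=exampleCA" -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # e.g. "b5213941" in the log
ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl x509 -noout -subject -in "$dir/$hash.0"
```

The `test -L … || ln -fs …` guard in the log avoids re-creating a symlink that already points at the right place.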
	I0226 10:39:58.038441    3116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 10:39:58.048899    3116 command_runner.go:130] > ca.crt
	I0226 10:39:58.048899    3116 command_runner.go:130] > ca.key
	I0226 10:39:58.048899    3116 command_runner.go:130] > healthcheck-client.crt
	I0226 10:39:58.048899    3116 command_runner.go:130] > healthcheck-client.key
	I0226 10:39:58.048899    3116 command_runner.go:130] > peer.crt
	I0226 10:39:58.048899    3116 command_runner.go:130] > peer.key
	I0226 10:39:58.048899    3116 command_runner.go:130] > server.crt
	I0226 10:39:58.048899    3116 command_runner.go:130] > server.key
	I0226 10:39:58.058745    3116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 10:39:58.073855    3116 command_runner.go:130] > Certificate will not expire
	I0226 10:39:58.083419    3116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 10:39:58.099520    3116 command_runner.go:130] > Certificate will not expire
	I0226 10:39:58.110364    3116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 10:39:58.125354    3116 command_runner.go:130] > Certificate will not expire
	I0226 10:39:58.135482    3116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 10:39:58.149533    3116 command_runner.go:130] > Certificate will not expire
	I0226 10:39:58.159569    3116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 10:39:58.174317    3116 command_runner.go:130] > Certificate will not expire
	I0226 10:39:58.185885    3116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0226 10:39:58.200779    3116 command_runner.go:130] > Certificate will not expire
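Each `openssl x509 -checkend 86400` above exits 0 (and openssl prints "Certificate will not expire") when the certificate remains valid for at least another 24 hours; minikube uses this to decide whether to regenerate certs. A sketch against a throwaway certificate (subject and paths are illustrative):

```shell
# -checkend N exits 0 if the certificate is still valid N seconds from now.
# "throwaway" and the temp paths are illustrative, not from the run.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=throwaway" -days 2 \
  -keyout "$dir/t.key" -out "$dir/t.crt" 2>/dev/null
if openssl x509 -noout -in "$dir/t.crt" -checkend 86400 >/dev/null; then
  echo "valid for at least another day"
fi
```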
	I0226 10:39:58.201315    3116 kubeadm.go:404] StartCluster: {Name:functional-366900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-366900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:39:58.209240    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 10:39:58.263529    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 10:39:58.281984    3116 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0226 10:39:58.282985    3116 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0226 10:39:58.282985    3116 command_runner.go:130] > /var/lib/minikube/etcd:
	I0226 10:39:58.282985    3116 command_runner.go:130] > member
	I0226 10:39:58.283063    3116 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 10:39:58.283137    3116 kubeadm.go:636] restartCluster start
	I0226 10:39:58.293811    3116 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 10:39:58.309721    3116 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:39:58.319256    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-366900
	I0226 10:39:58.473668    3116 kubeconfig.go:92] found "functional-366900" server: "https://127.0.0.1:51489"
	I0226 10:39:58.475277    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:39:58.476452    3116 kapi.go:59] client config for functional-366900: &rest.Config{Host:"https://127.0.0.1:51489", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-366900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-366900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x251e0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 10:39:58.478047    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0226 10:39:58.488557    3116 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 10:39:58.507881    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:39:58.518871    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 10:39:58.540156    3116 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:39:59.016186    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:39:59.026963    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 10:39:59.050351    3116 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:39:59.520324    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:39:59.531614    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 10:39:59.554059    3116 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:40:00.008054    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:40:00.021043    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 10:40:00.139615    3116 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:40:00.508758    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:40:00.520033    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 10:40:00.642498    3116 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:40:01.015001    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:40:01.029231    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 10:40:01.057857    3116 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 10:40:01.514237    3116 api_server.go:166] Checking apiserver status ...
	I0226 10:40:01.525060    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 10:40:01.640460    3116 command_runner.go:130] > 6036
	I0226 10:40:01.658101    3116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6036/cgroup
	I0226 10:40:01.748597    3116 command_runner.go:130] > 21:freezer:/docker/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/kubepods/burstable/pod08db092730c2aa8c12611f56fef07b02/f473f580abbe2f922e9b7d85932878641c20af546afbf7ae98266e6684596ba8
	I0226 10:40:01.749253    3116 api_server.go:182] apiserver freezer: "21:freezer:/docker/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/kubepods/burstable/pod08db092730c2aa8c12611f56fef07b02/f473f580abbe2f922e9b7d85932878641c20af546afbf7ae98266e6684596ba8"
	I0226 10:40:01.764901    3116 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a8ef7f3228858d1a08402071dd67889907934ac053f9580c5a398793d252de71/kubepods/burstable/pod08db092730c2aa8c12611f56fef07b02/f473f580abbe2f922e9b7d85932878641c20af546afbf7ae98266e6684596ba8/freezer.state
	I0226 10:40:01.935756    3116 command_runner.go:130] > THAWED
	I0226 10:40:01.935859    3116 api_server.go:204] freezer state: "THAWED"
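The freezer lookup above finds the apiserver's cgroup v1 `freezer` hierarchy in /proc/&lt;pid&gt;/cgroup and reads its `freezer.state` ("THAWED" means not paused). A hypothetical parse of the same line format — the container and pod IDs below are fabricated stand-ins, and the /sys path only exists on cgroup v1 hosts:

```shell
# Extract the cgroup path from an "N:freezer:/path" line (the format egrep'd
# above) and build the freezer.state path. IDs here are fabricated examples.
line='21:freezer:/docker/abc123/kubepods/burstable/podX/ctrY'
path=${line#*:freezer:}   # strip the "21:freezer:" prefix
echo "/sys/fs/cgroup/freezer${path}/freezer.state"
# → /sys/fs/cgroup/freezer/docker/abc123/kubepods/burstable/podX/ctrY/freezer.state
```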
	I0226 10:40:01.935859    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:01.941451    3116 api_server.go:269] stopped: https://127.0.0.1:51489/healthz: Get "https://127.0.0.1:51489/healthz": EOF
	I0226 10:40:01.941451    3116 retry.go:31] will retry after 303.814998ms: state is "Stopped"
	I0226 10:40:02.249882    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:02.254469    3116 api_server.go:269] stopped: https://127.0.0.1:51489/healthz: Get "https://127.0.0.1:51489/healthz": EOF
	I0226 10:40:02.254520    3116 retry.go:31] will retry after 378.249976ms: state is "Stopped"
	I0226 10:40:02.643059    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:06.241151    3116 api_server.go:279] https://127.0.0.1:51489/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 10:40:06.241859    3116 retry.go:31] will retry after 331.403048ms: https://127.0.0.1:51489/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 10:40:06.588256    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:06.646388    3116 api_server.go:279] https://127.0.0.1:51489/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 10:40:06.646388    3116 retry.go:31] will retry after 501.703998ms: https://127.0.0.1:51489/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 10:40:07.152391    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:07.168244    3116 api_server.go:279] https://127.0.0.1:51489/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 10:40:07.168305    3116 retry.go:31] will retry after 477.458272ms: https://127.0.0.1:51489/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 10:40:07.656426    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:07.671747    3116 api_server.go:279] https://127.0.0.1:51489/healthz returned 200:
	ok
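The 403 → 500 → 200 progression above is normal apiserver startup: anonymous probes are Forbidden until RBAC bootstrap roles exist, then the verbose /healthz body reports each post-start hook until all pass. A small sketch that pulls the failing checks out of a verbose body like the ones logged (the body below is a trimmed copy of the 500 responses above):

```shell
# List the failing checks ([-] lines) from a /healthz verbose response body.
body='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed'
printf '%s\n' "$body" | grep '^\[-\]'
```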
	I0226 10:40:07.672499    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods
	I0226 10:40:07.672635    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:07.672775    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:07.672794    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:07.697236    3116 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0226 10:40:07.697236    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:07.697236    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:07.697236    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:07.697236    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:07 GMT
	I0226 10:40:07.697236    3116 round_trippers.go:580]     Audit-Id: 045a4e81-0081-47fb-8f54-82668b8b2cfc
	I0226 10:40:07.697236    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:07.697236    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:07.698200    3116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"442"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50665 chars]
	I0226 10:40:07.704202    3116 system_pods.go:86] 7 kube-system pods found
	I0226 10:40:07.704268    3116 system_pods.go:89] "coredns-5dd5756b68-xnmfr" [748f3faf-1bf6-4894-aa38-e45189b52880] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0226 10:40:07.704268    3116 system_pods.go:89] "etcd-functional-366900" [a927a668-2a96-436d-9eae-c0e5178b026d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0226 10:40:07.704333    3116 system_pods.go:89] "kube-apiserver-functional-366900" [e1d69c97-977d-4891-9f40-de9843e731c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 10:40:07.704333    3116 system_pods.go:89] "kube-controller-manager-functional-366900" [959853ef-9603-48a6-ab33-8d3b94ec6c8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 10:40:07.704333    3116 system_pods.go:89] "kube-proxy-k75mq" [aa4b9ca9-e541-47ee-8fe3-31fb2382a212] Running
	I0226 10:40:07.704399    3116 system_pods.go:89] "kube-scheduler-functional-366900" [8f484dd7-9b37-430b-b66e-845e280c54ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0226 10:40:07.704432    3116 system_pods.go:89] "storage-provisioner" [8af976fb-796a-4d3b-a3db-c54011d75859] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0226 10:40:07.704495    3116 round_trippers.go:463] GET https://127.0.0.1:51489/version
	I0226 10:40:07.704561    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:07.704561    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:07.704561    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:07.708095    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0226 10:40:07.708095    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:07.708095    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:07.708095    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:07.708095    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:07.708095    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:07.708095    3116 round_trippers.go:580]     Content-Length: 264
	I0226 10:40:07.708095    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:07 GMT
	I0226 10:40:07.708095    3116 round_trippers.go:580]     Audit-Id: ebfe7256-cabd-4a7e-90f8-cabfa0c93592
	I0226 10:40:07.708095    3116 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0226 10:40:07.708095    3116 api_server.go:141] control plane version: v1.28.4
	I0226 10:40:07.708095    3116 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0226 10:40:07.708095    3116 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0226 10:40:07.708623    3116 kubeadm.go:640] restartCluster took 9.4254269s
	I0226 10:40:07.708713    3116 kubeadm.go:406] StartCluster complete in 9.507283s
	I0226 10:40:07.708713    3116 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:40:07.708831    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:40:07.709766    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:40:07.711067    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 10:40:07.711067    3116 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 10:40:07.711067    3116 addons.go:69] Setting default-storageclass=true in profile "functional-366900"
	I0226 10:40:07.711067    3116 addons.go:69] Setting storage-provisioner=true in profile "functional-366900"
	I0226 10:40:07.711617    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-366900"
	I0226 10:40:07.711617    3116 addons.go:234] Setting addon storage-provisioner=true in "functional-366900"
	W0226 10:40:07.711740    3116 addons.go:243] addon storage-provisioner should already be in state true
	I0226 10:40:07.711998    3116 host.go:66] Checking if "functional-366900" exists ...
	I0226 10:40:07.712133    3116 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 10:40:07.727380    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:40:07.728032    3116 kapi.go:59] client config for functional-366900: &rest.Config{Host:"https://127.0.0.1:51489", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-366900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-366900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x251e0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 10:40:07.729346    3116 round_trippers.go:463] GET https://127.0.0.1:51489/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0226 10:40:07.729346    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:07.729346    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:07.729346    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:07.733873    3116 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
	I0226 10:40:07.737483    3116 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
	I0226 10:40:07.739523    3116 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0226 10:40:07.739602    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:07.739602    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:07.739602    3116 round_trippers.go:580]     Content-Length: 291
	I0226 10:40:07.739602    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:07 GMT
	I0226 10:40:07.739602    3116 round_trippers.go:580]     Audit-Id: 38dee95f-e218-4c73-82e6-a6cf5dfb5f19
	I0226 10:40:07.739602    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:07.739602    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:07.739602    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:07.739602    3116 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dede8b1b-e875-4337-a849-cd56e76c5c08","resourceVersion":"416","creationTimestamp":"2024-02-26T10:39:01Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0226 10:40:07.739602    3116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-366900" context rescaled to 1 replicas
	I0226 10:40:07.739602    3116 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 10:40:07.744893    3116 out.go:177] * Verifying Kubernetes components...
	I0226 10:40:07.762954    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 10:40:07.855943    3116 command_runner.go:130] > apiVersion: v1
	I0226 10:40:07.855943    3116 command_runner.go:130] > data:
	I0226 10:40:07.855943    3116 command_runner.go:130] >   Corefile: |
	I0226 10:40:07.855943    3116 command_runner.go:130] >     .:53 {
	I0226 10:40:07.855943    3116 command_runner.go:130] >         log
	I0226 10:40:07.855943    3116 command_runner.go:130] >         errors
	I0226 10:40:07.855943    3116 command_runner.go:130] >         health {
	I0226 10:40:07.855943    3116 command_runner.go:130] >            lameduck 5s
	I0226 10:40:07.855943    3116 command_runner.go:130] >         }
	I0226 10:40:07.855943    3116 command_runner.go:130] >         ready
	I0226 10:40:07.855943    3116 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0226 10:40:07.855943    3116 command_runner.go:130] >            pods insecure
	I0226 10:40:07.855943    3116 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0226 10:40:07.855943    3116 command_runner.go:130] >            ttl 30
	I0226 10:40:07.855943    3116 command_runner.go:130] >         }
	I0226 10:40:07.855943    3116 command_runner.go:130] >         prometheus :9153
	I0226 10:40:07.855943    3116 command_runner.go:130] >         hosts {
	I0226 10:40:07.855943    3116 command_runner.go:130] >            192.168.65.254 host.minikube.internal
	I0226 10:40:07.855943    3116 command_runner.go:130] >            fallthrough
	I0226 10:40:07.855943    3116 command_runner.go:130] >         }
	I0226 10:40:07.855943    3116 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0226 10:40:07.855943    3116 command_runner.go:130] >            max_concurrent 1000
	I0226 10:40:07.855943    3116 command_runner.go:130] >         }
	I0226 10:40:07.855943    3116 command_runner.go:130] >         cache 30
	I0226 10:40:07.855943    3116 command_runner.go:130] >         loop
	I0226 10:40:07.855943    3116 command_runner.go:130] >         reload
	I0226 10:40:07.855943    3116 command_runner.go:130] >         loadbalance
	I0226 10:40:07.855943    3116 command_runner.go:130] >     }
	I0226 10:40:07.855943    3116 command_runner.go:130] > kind: ConfigMap
	I0226 10:40:07.855943    3116 command_runner.go:130] > metadata:
	I0226 10:40:07.855943    3116 command_runner.go:130] >   creationTimestamp: "2024-02-26T10:39:01Z"
	I0226 10:40:07.855943    3116 command_runner.go:130] >   name: coredns
	I0226 10:40:07.855943    3116 command_runner.go:130] >   namespace: kube-system
	I0226 10:40:07.855943    3116 command_runner.go:130] >   resourceVersion: "364"
	I0226 10:40:07.855943    3116 command_runner.go:130] >   uid: c54295c5-f03b-430f-897d-a7e371c31276
	I0226 10:40:07.855943    3116 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 10:40:07.864957    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-366900
	I0226 10:40:07.901963    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 10:40:07.903940    3116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 10:40:07.903940    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 10:40:07.913953    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:40:07.915950    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:40:07.915950    3116 kapi.go:59] client config for functional-366900: &rest.Config{Host:"https://127.0.0.1:51489", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-366900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-366900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x251e0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 10:40:07.916949    3116 addons.go:234] Setting addon default-storageclass=true in "functional-366900"
	W0226 10:40:07.916949    3116 addons.go:243] addon default-storageclass should already be in state true
	I0226 10:40:07.916949    3116 host.go:66] Checking if "functional-366900" exists ...
	I0226 10:40:07.934943    3116 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
	I0226 10:40:08.026273    3116 node_ready.go:35] waiting up to 6m0s for node "functional-366900" to be "Ready" ...
	I0226 10:40:08.026273    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:08.026273    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:08.026273    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:08.026273    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:08.035726    3116 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0226 10:40:08.035726    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:08.035726    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:08 GMT
	I0226 10:40:08.035726    3116 round_trippers.go:580]     Audit-Id: 5c6351bf-64b3-423e-bcba-0dabb5648829
	I0226 10:40:08.035726    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:08.035726    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:08.035726    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:08.035726    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:08.035726    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:08.036560    3116 node_ready.go:49] node "functional-366900" has status "Ready":"True"
	I0226 10:40:08.036560    3116 node_ready.go:38] duration metric: took 10.2869ms waiting for node "functional-366900" to be "Ready" ...
	I0226 10:40:08.036560    3116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 10:40:08.036560    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods
	I0226 10:40:08.036560    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:08.036560    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:08.036560    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:08.044566    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:08.044566    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:08.044566    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:08.044566    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:08.044566    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:08.044566    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:08 GMT
	I0226 10:40:08.044566    3116 round_trippers.go:580]     Audit-Id: 284abcbe-a459-4b59-908c-cc66d66c900f
	I0226 10:40:08.044566    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:08.046569    3116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"442"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50665 chars]
	I0226 10:40:08.049562    3116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xnmfr" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:08.049562    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:08.049562    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:08.049562    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:08.049562    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:08.055563    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:08.055563    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:08.055563    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:08.055563    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:08.055563    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:08 GMT
	I0226 10:40:08.055563    3116 round_trippers.go:580]     Audit-Id: 58a5a7b6-8296-4e72-97d2-7a41fb33c58a
	I0226 10:40:08.055563    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:08.055563    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:08.056565    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:08.056565    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:08.056565    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:08.056565    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:08.056565    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:08.061574    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:08.061574    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:08.061574    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:08 GMT
	I0226 10:40:08.061574    3116 round_trippers.go:580]     Audit-Id: e9a53363-a27e-4bdc-93fb-8f79455c4086
	I0226 10:40:08.061574    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:08.061574    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:08.061574    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:08.061574    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:08.062563    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:08.089569    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:40:08.104590    3116 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 10:40:08.104590    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 10:40:08.114561    3116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
	I0226 10:40:08.241771    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 10:40:08.276771    3116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
	I0226 10:40:08.439342    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 10:40:08.553328    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:08.553328    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:08.553328    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:08.553328    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:08.559843    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:08.559956    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:08.559956    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:08.560012    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:08.560012    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:08 GMT
	I0226 10:40:08.560050    3116 round_trippers.go:580]     Audit-Id: 1d2d67c1-53e3-482c-a629-7485822596d0
	I0226 10:40:08.560050    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:08.560086    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:08.560293    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:08.560956    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:08.560956    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:08.560956    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:08.560956    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:08.565664    3116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0226 10:40:08.565664    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:08.565664    3116 round_trippers.go:580]     Audit-Id: 7566a0de-85d3-4608-975d-12d1f813b107
	I0226 10:40:08.565664    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:08.565664    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:08.565664    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:08.565664    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:08.565664    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:08 GMT
	I0226 10:40:08.566858    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:09.062339    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:09.062339    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:09.062339    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:09.062339    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:09.069660    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:09.069660    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:09.069660    3116 round_trippers.go:580]     Audit-Id: 5795cd80-91c4-45c0-bfc9-43c3ba6de3cd
	I0226 10:40:09.069660    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:09.069660    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:09.069660    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:09.069660    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:09.069660    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:09 GMT
	I0226 10:40:09.070528    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:09.071229    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:09.071229    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:09.071229    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:09.071229    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:09.078287    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:09.078415    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:09.078415    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:09.078415    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:09.078504    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:09.078504    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:09.078562    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:09 GMT
	I0226 10:40:09.078613    3116 round_trippers.go:580]     Audit-Id: 77dff22b-0a59-4be1-a64f-5dd68b81cb74
	I0226 10:40:09.080385    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:09.405936    3116 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0226 10:40:09.405936    3116 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0226 10:40:09.405936    3116 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0226 10:40:09.405936    3116 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0226 10:40:09.405936    3116 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0226 10:40:09.405936    3116 command_runner.go:130] > pod/storage-provisioner configured
	I0226 10:40:09.406457    3116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1646831s)
	I0226 10:40:09.406534    3116 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0226 10:40:09.406731    3116 round_trippers.go:463] GET https://127.0.0.1:51489/apis/storage.k8s.io/v1/storageclasses
	I0226 10:40:09.406731    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:09.406791    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:09.406825    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:09.412561    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:09.412561    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:09.412561    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:09.412561    3116 round_trippers.go:580]     Content-Length: 1273
	I0226 10:40:09.412561    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:09 GMT
	I0226 10:40:09.412561    3116 round_trippers.go:580]     Audit-Id: 63fa5cec-4238-4bb7-b760-284ec80e9e7f
	I0226 10:40:09.412561    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:09.412561    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:09.412561    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:09.412561    3116 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"442"},"items":[{"metadata":{"name":"standard","uid":"10f45399-7654-41cf-acdd-96ba1a5197bf","resourceVersion":"362","creationTimestamp":"2024-02-26T10:39:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-26T10:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0226 10:40:09.413770    3116 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"10f45399-7654-41cf-acdd-96ba1a5197bf","resourceVersion":"362","creationTimestamp":"2024-02-26T10:39:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-26T10:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0226 10:40:09.413770    3116 round_trippers.go:463] PUT https://127.0.0.1:51489/apis/storage.k8s.io/v1/storageclasses/standard
	I0226 10:40:09.413770    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:09.413770    3116 round_trippers.go:473]     Content-Type: application/json
	I0226 10:40:09.413770    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:09.413770    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:09.420385    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:09.420385    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:09.420385    3116 round_trippers.go:580]     Content-Length: 1220
	I0226 10:40:09.420385    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:09 GMT
	I0226 10:40:09.420385    3116 round_trippers.go:580]     Audit-Id: 7aef5501-c75e-464a-8071-b50d118ce1cb
	I0226 10:40:09.420385    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:09.420385    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:09.420385    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:09.420385    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:09.420385    3116 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"10f45399-7654-41cf-acdd-96ba1a5197bf","resourceVersion":"362","creationTimestamp":"2024-02-26T10:39:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-26T10:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0226 10:40:09.436964    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 10:40:09.438535    3116 addons.go:505] enable addons completed in 1.7274634s: enabled=[storage-provisioner default-storageclass]
	I0226 10:40:09.565596    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:09.565596    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:09.565596    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:09.565596    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:09.571530    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:09.571530    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:09.571530    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:09 GMT
	I0226 10:40:09.571530    3116 round_trippers.go:580]     Audit-Id: 713bc000-cea6-46c2-84ce-5bcc356d938f
	I0226 10:40:09.571530    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:09.571530    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:09.571530    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:09.571530    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:09.572522    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:09.572975    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:09.572975    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:09.572975    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:09.572975    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:09.578250    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:09.578346    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:09.578380    3116 round_trippers.go:580]     Audit-Id: 655cb727-8e48-45dd-90da-cdbb48fe2507
	I0226 10:40:09.578380    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:09.578380    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:09.578380    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:09.578409    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:09.578441    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:09 GMT
	I0226 10:40:09.578630    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:10.057729    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:10.057729    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:10.057863    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:10.057863    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:10.064741    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:10.064741    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:10.064741    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:10.064741    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:10.064741    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:10 GMT
	I0226 10:40:10.064741    3116 round_trippers.go:580]     Audit-Id: c22022b5-a49d-4aeb-a38c-5e90c61a388f
	I0226 10:40:10.064741    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:10.064741    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:10.064741    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:10.065746    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:10.065832    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:10.065832    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:10.065832    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:10.073649    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:10.073649    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:10.073649    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:10.073649    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:10 GMT
	I0226 10:40:10.073649    3116 round_trippers.go:580]     Audit-Id: 096ca36f-02c4-4b6b-bc0d-2c6347a6aaa3
	I0226 10:40:10.073649    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:10.073649    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:10.073649    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:10.073649    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:10.073649    3116 pod_ready.go:102] pod "coredns-5dd5756b68-xnmfr" in "kube-system" namespace has status "Ready":"False"
	I0226 10:40:10.557603    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:10.557677    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:10.557677    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:10.557677    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:10.565912    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:10.565912    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:10.565912    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:10 GMT
	I0226 10:40:10.565912    3116 round_trippers.go:580]     Audit-Id: c067b1f1-15c0-415e-8b4a-a9460eb78323
	I0226 10:40:10.565912    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:10.566444    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:10.566444    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:10.566444    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:10.566754    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:10.567034    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:10.567034    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:10.567034    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:10.567034    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:10.575842    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:10.575842    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:10.575842    3116 round_trippers.go:580]     Audit-Id: f1a8996f-e2d2-496f-83be-c36fd492a627
	I0226 10:40:10.575965    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:10.575965    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:10.575965    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:10.575965    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:10.575965    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:10 GMT
	I0226 10:40:10.576166    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:11.061496    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:11.061496    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:11.061780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:11.061780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:11.067564    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:11.067564    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:11.067564    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:11 GMT
	I0226 10:40:11.067564    3116 round_trippers.go:580]     Audit-Id: 2d33e6b3-dfef-47e2-8323-3dcb4c851616
	I0226 10:40:11.067564    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:11.067564    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:11.067564    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:11.067564    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:11.068276    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:11.068885    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:11.068885    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:11.068885    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:11.068885    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:11.075348    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:11.075348    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:11.075348    3116 round_trippers.go:580]     Audit-Id: 4c2230e8-22a3-4d2e-8b17-d53535083071
	I0226 10:40:11.075348    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:11.075348    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:11.075348    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:11.075348    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:11.075348    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:11 GMT
	I0226 10:40:11.075981    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:11.563845    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:11.563845    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:11.563845    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:11.563845    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:11.585015    3116 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0226 10:40:11.585015    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:11.585015    3116 round_trippers.go:580]     Audit-Id: b42a1ab0-2f97-48e5-9d9f-7fe07c2ddf8c
	I0226 10:40:11.585015    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:11.585015    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:11.585015    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:11.585015    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:11.585015    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:11 GMT
	I0226 10:40:11.585546    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:11.587111    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:11.587223    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:11.587433    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:11.587542    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:11.596335    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:11.596335    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:11.596335    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:11.596335    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:11.596335    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:11 GMT
	I0226 10:40:11.596335    3116 round_trippers.go:580]     Audit-Id: 5f827a83-1caf-4cca-a2e7-b60bcd86eb73
	I0226 10:40:11.596335    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:11.596335    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:11.596335    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:12.050598    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:12.050663    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:12.050663    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:12.050663    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:12.057940    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:12.057940    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:12.057940    3116 round_trippers.go:580]     Audit-Id: e0e474f6-2e48-4a72-b8c8-75cabbef1296
	I0226 10:40:12.057940    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:12.057940    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:12.057940    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:12.057940    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:12.057940    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:12 GMT
	I0226 10:40:12.057940    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:12.058635    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:12.058635    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:12.058635    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:12.058635    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:12.066235    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:12.066235    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:12.066235    3116 round_trippers.go:580]     Audit-Id: b7d3055a-11c1-4bbd-9080-27612ed07b60
	I0226 10:40:12.066235    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:12.066235    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:12.066235    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:12.066235    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:12.066235    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:12 GMT
	I0226 10:40:12.066923    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:12.563817    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:12.563817    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:12.563817    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:12.563817    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:12.570949    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:12.570949    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:12.570949    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:12.570949    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:12.570949    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:12.571490    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:12.571490    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:12 GMT
	I0226 10:40:12.571490    3116 round_trippers.go:580]     Audit-Id: 280ce825-7317-4145-a180-6f7c6565ab31
	I0226 10:40:12.571779    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:12.572103    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:12.572103    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:12.572103    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:12.572103    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:12.578648    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:12.578648    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:12.578648    3116 round_trippers.go:580]     Audit-Id: 8515637c-b4c8-4b2b-b4f2-3658aa46ea03
	I0226 10:40:12.578648    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:12.578648    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:12.578648    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:12.578648    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:12.578648    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:12 GMT
	I0226 10:40:12.578648    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:12.579603    3116 pod_ready.go:102] pod "coredns-5dd5756b68-xnmfr" in "kube-system" namespace has status "Ready":"False"
	I0226 10:40:13.065975    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:13.065975    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:13.065975    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:13.066037    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:13.073324    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:13.073324    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:13.073324    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:13 GMT
	I0226 10:40:13.073324    3116 round_trippers.go:580]     Audit-Id: ff6426ca-012d-4f96-9e9c-7d7bd0ee116c
	I0226 10:40:13.073324    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:13.073324    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:13.073324    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:13.073324    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:13.073924    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:13.074860    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:13.074860    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:13.074860    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:13.074860    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:13.082206    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:13.082206    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:13.082206    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:13.082206    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:13 GMT
	I0226 10:40:13.082206    3116 round_trippers.go:580]     Audit-Id: 89ac2345-86ba-49c7-ab65-59832edf3f3f
	I0226 10:40:13.082206    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:13.082206    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:13.082206    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:13.082856    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:13.564983    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:13.565181    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:13.565181    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:13.565181    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:13.574825    3116 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0226 10:40:13.574825    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:13.574825    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:13 GMT
	I0226 10:40:13.574825    3116 round_trippers.go:580]     Audit-Id: 72daf007-b6e7-4539-9351-d12969ff0b58
	I0226 10:40:13.574825    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:13.574825    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:13.574825    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:13.574825    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:13.574825    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:13.575840    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:13.575922    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:13.575922    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:13.575922    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:13.583737    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:13.583737    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:13.583737    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:13 GMT
	I0226 10:40:13.583737    3116 round_trippers.go:580]     Audit-Id: 421379a3-f297-435b-8c54-d0f9fe68a8c0
	I0226 10:40:13.583737    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:13.583737    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:13.583737    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:13.583737    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:13.583737    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:14.065101    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:14.065386    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:14.065386    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:14.065386    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:14.072331    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:14.072391    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:14.072391    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:14.072391    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:14.072391    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:14.072391    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:14.072391    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:14 GMT
	I0226 10:40:14.072391    3116 round_trippers.go:580]     Audit-Id: 52019499-7808-4a16-92e0-f6e952caeae2
	I0226 10:40:14.072931    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:14.073588    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:14.073732    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:14.073732    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:14.073732    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:14.079490    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:14.079490    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:14.079490    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:14.079490    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:14.079490    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:14.079490    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:14.079490    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:14 GMT
	I0226 10:40:14.079490    3116 round_trippers.go:580]     Audit-Id: 79309774-bea8-4e3b-8624-617c68323b5a
	I0226 10:40:14.079490    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:14.563706    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:14.563810    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:14.563810    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:14.563810    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:14.572012    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:14.572012    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:14.572012    3116 round_trippers.go:580]     Audit-Id: 1e8de616-218e-46e7-8657-489fb5360c8a
	I0226 10:40:14.572012    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:14.572012    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:14.572012    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:14.572012    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:14.572012    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:14 GMT
	I0226 10:40:14.572851    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"440","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0226 10:40:14.573738    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:14.573832    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:14.573895    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:14.573895    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:14.580682    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:14.580682    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:14.580682    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:14.580682    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:14.580682    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:14.580682    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:14.580682    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:14 GMT
	I0226 10:40:14.580682    3116 round_trippers.go:580]     Audit-Id: 99d7b5c7-84f0-44c2-9cd4-110fa3fe1991
	I0226 10:40:14.581798    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:14.582750    3116 pod_ready.go:102] pod "coredns-5dd5756b68-xnmfr" in "kube-system" namespace has status "Ready":"False"
	I0226 10:40:15.063422    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr
	I0226 10:40:15.063872    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:15.063872    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:15.063872    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:15.068969    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:15.068969    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:15.068969    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:15.068969    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:15.068969    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:15 GMT
	I0226 10:40:15.068969    3116 round_trippers.go:580]     Audit-Id: 2b0cb48b-4ea5-4bdd-830f-4a7ae133ed78
	I0226 10:40:15.068969    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:15.068969    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:15.069657    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"504","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5945 chars]
	I0226 10:40:15.070225    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:15.070308    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:15.070308    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:15.070366    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:15.090452    3116 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0226 10:40:15.090452    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:15.090452    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:15.090452    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:15.090452    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:15.090452    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:15 GMT
	I0226 10:40:15.090452    3116 round_trippers.go:580]     Audit-Id: 7a5df1cf-cdd7-402a-aac3-32ca9e1b7258
	I0226 10:40:15.090452    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:15.090452    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:15.091242    3116 pod_ready.go:92] pod "coredns-5dd5756b68-xnmfr" in "kube-system" namespace has status "Ready":"True"
	I0226 10:40:15.091242    3116 pod_ready.go:81] duration metric: took 7.0416618s waiting for pod "coredns-5dd5756b68-xnmfr" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:15.091242    3116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:15.091242    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:15.091242    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:15.091242    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:15.091242    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:15.098247    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:15.098247    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:15.098247    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:15.098247    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:15.098247    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:15.098247    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:15.098247    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:15 GMT
	I0226 10:40:15.098247    3116 round_trippers.go:580]     Audit-Id: 0be61a83-3070-43c3-a67a-9d9561da32ce
	I0226 10:40:15.098793    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:15.099298    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:15.099298    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:15.099298    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:15.099298    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:15.104937    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:15.105238    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:15.105238    3116 round_trippers.go:580]     Audit-Id: 634b9a0c-e5a4-4298-989a-b77c4758a903
	I0226 10:40:15.105294    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:15.105294    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:15.105294    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:15.105294    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:15.105294    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:15 GMT
	I0226 10:40:15.106868    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:15.594504    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:15.594822    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:15.594822    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:15.594822    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:15.603683    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:15.603758    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:15.603833    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:15.603871    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:15.603871    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:15.603871    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:15 GMT
	I0226 10:40:15.603968    3116 round_trippers.go:580]     Audit-Id: 3e879078-2170-4481-8434-5e3667354fd6
	I0226 10:40:15.604025    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:15.604328    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:15.605172    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:15.605228    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:15.605228    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:15.605292    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:15.619109    3116 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0226 10:40:15.619283    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:15.619283    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:15.619283    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:15.619317    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:15.619317    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:15.619317    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:15 GMT
	I0226 10:40:15.619317    3116 round_trippers.go:580]     Audit-Id: 804e737a-4859-4d3d-985f-93ad26231c5e
	I0226 10:40:15.619612    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:16.107633    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:16.107633    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:16.107633    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:16.107750    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:16.115585    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:16.115585    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:16.115585    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:16.115585    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:16.115585    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:16 GMT
	I0226 10:40:16.115585    3116 round_trippers.go:580]     Audit-Id: dc56d0b8-7de7-4de1-9885-5180669f1817
	I0226 10:40:16.115585    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:16.115585    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:16.115585    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:16.116374    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:16.116374    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:16.116374    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:16.116374    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:16.123867    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:16.123867    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:16.123867    3116 round_trippers.go:580]     Audit-Id: 6f537f9e-ab8b-45c9-945b-2bc879ab0cdc
	I0226 10:40:16.123867    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:16.123867    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:16.123867    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:16.123867    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:16.123867    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:16 GMT
	I0226 10:40:16.123867    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:16.605949    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:16.606034    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:16.606034    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:16.606034    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:16.613472    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:16.613511    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:16.613511    3116 round_trippers.go:580]     Audit-Id: 1cdf993e-5fee-48d9-b783-b78e960021a3
	I0226 10:40:16.613603    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:16.613603    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:16.613603    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:16.613603    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:16.613603    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:16 GMT
	I0226 10:40:16.613603    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:16.614492    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:16.614577    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:16.614577    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:16.614577    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:16.621091    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:16.621091    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:16.621091    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:16.621091    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:16.621091    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:16 GMT
	I0226 10:40:16.621091    3116 round_trippers.go:580]     Audit-Id: d85c6315-57d7-4639-83f0-646750764165
	I0226 10:40:16.621091    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:16.621091    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:16.625076    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:17.092406    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:17.092406    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:17.092406    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:17.092482    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:17.098458    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:17.099000    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:17.099000    3116 round_trippers.go:580]     Audit-Id: bf13ad87-bd26-4d45-b80f-7c2f0c0f5b91
	I0226 10:40:17.099000    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:17.099000    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:17.099000    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:17.099066    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:17.099116    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:17 GMT
	I0226 10:40:17.099116    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:17.100025    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:17.100076    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:17.100076    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:17.100076    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:17.106890    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:17.106890    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:17.106890    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:17.106890    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:17 GMT
	I0226 10:40:17.106890    3116 round_trippers.go:580]     Audit-Id: 3527e225-ae76-47bb-84a3-fdb0da6cf1a8
	I0226 10:40:17.106890    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:17.106890    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:17.106890    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:17.106890    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:17.107569    3116 pod_ready.go:102] pod "etcd-functional-366900" in "kube-system" namespace has status "Ready":"False"
	I0226 10:40:17.593426    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:17.593426    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:17.593504    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:17.593504    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:17.604572    3116 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0226 10:40:17.604572    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:17.604572    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:17.604572    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:17.604572    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:17.604572    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:17.604572    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:17 GMT
	I0226 10:40:17.604572    3116 round_trippers.go:580]     Audit-Id: 4a0dc03c-d65b-46d7-8006-261d7bda265c
	I0226 10:40:17.604572    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:17.606007    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:17.606007    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:17.606007    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:17.606007    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:17.613427    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:17.613427    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:17.613427    3116 round_trippers.go:580]     Audit-Id: df274b65-2c42-46a2-b903-4a7bd564aeb5
	I0226 10:40:17.613427    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:17.613427    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:17.613427    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:17.613427    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:17.613427    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:17 GMT
	I0226 10:40:17.613968    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:18.092967    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:18.092967    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:18.092967    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:18.092967    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:18.099404    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:18.099404    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:18.099404    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:18.099404    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:18.099404    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:18 GMT
	I0226 10:40:18.099541    3116 round_trippers.go:580]     Audit-Id: 8b687c22-ac6b-4221-9455-e008471cf9f1
	I0226 10:40:18.099541    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:18.099541    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:18.099757    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:18.100433    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:18.100433    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:18.100433    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:18.100433    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:18.107529    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:18.107529    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:18.107529    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:18.107529    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:18.107529    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:18.107529    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:18 GMT
	I0226 10:40:18.107529    3116 round_trippers.go:580]     Audit-Id: bd69958b-d29e-4157-b7e3-ea74391dad35
	I0226 10:40:18.107529    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:18.108433    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:18.592946    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:18.592946    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:18.592946    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:18.592946    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:18.599336    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:18.599908    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:18.599908    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:18.599968    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:18.599968    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:18.599968    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:18.599968    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:18 GMT
	I0226 10:40:18.599968    3116 round_trippers.go:580]     Audit-Id: 33dd6a99-d744-41c3-a509-92edfd5e0f61
	I0226 10:40:18.599968    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:18.600759    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:18.600799    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:18.600799    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:18.600799    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:18.608275    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:18.608275    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:18.608275    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:18.608275    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:18 GMT
	I0226 10:40:18.608275    3116 round_trippers.go:580]     Audit-Id: 76fd8ed9-3aa4-45f6-a1cd-b8b42a20d866
	I0226 10:40:18.608275    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:18.608275    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:18.608275    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:18.608275    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:19.097466    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:19.097466    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.097466    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.097903    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.145831    3116 round_trippers.go:574] Response Status: 200 OK in 47 milliseconds
	I0226 10:40:19.145935    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.145935    3116 round_trippers.go:580]     Audit-Id: 76149bf1-b313-4bcb-a44a-52cc6d5f265f
	I0226 10:40:19.145935    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.145935    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.145935    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.145935    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.145935    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.145935    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"436","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I0226 10:40:19.146754    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:19.146805    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.146805    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.146805    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.154706    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:19.154986    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.154986    3116 round_trippers.go:580]     Audit-Id: 583d19bf-d709-4b19-87a9-f705e697fe5b
	I0226 10:40:19.155055    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.155055    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.155055    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.155122    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.155122    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.155475    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:19.155615    3116 pod_ready.go:102] pod "etcd-functional-366900" in "kube-system" namespace has status "Ready":"False"
	I0226 10:40:19.601349    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/etcd-functional-366900
	I0226 10:40:19.601430    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.601430    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.601430    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.609251    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:19.609251    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.609251    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.609251    3116 round_trippers.go:580]     Audit-Id: 48205425-d420-4807-8ca1-b7a219bd6ce1
	I0226 10:40:19.609251    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.609251    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.609251    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.609251    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.609953    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-366900","namespace":"kube-system","uid":"a927a668-2a96-436d-9eae-c0e5178b026d","resourceVersion":"517","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.mirror":"981e8281e5371a424725e53e791052d6","kubernetes.io/config.seen":"2024-02-26T10:39:01.785836802Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5857 chars]
	I0226 10:40:19.610368    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:19.610368    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.610368    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.610368    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.618083    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:19.618103    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.618103    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.618103    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.618103    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.618103    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.618167    3116 round_trippers.go:580]     Audit-Id: 9d178582-1b28-451b-a7e1-7eae4b62be65
	I0226 10:40:19.618190    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.618218    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:19.618897    3116 pod_ready.go:92] pod "etcd-functional-366900" in "kube-system" namespace has status "Ready":"True"
	I0226 10:40:19.618926    3116 pod_ready.go:81] duration metric: took 4.5276719s waiting for pod "etcd-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:19.618926    3116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:19.618926    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-366900
	I0226 10:40:19.618926    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.618926    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.618926    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.624410    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:19.624410    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.625059    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.625059    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.625059    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.625059    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.625059    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.625059    3116 round_trippers.go:580]     Audit-Id: 795b90a6-86a5-4e78-92e7-e17732860099
	I0226 10:40:19.625260    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-366900","namespace":"kube-system","uid":"e1d69c97-977d-4891-9f40-de9843e731c8","resourceVersion":"506","creationTimestamp":"2024-02-26T10:38:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"08db092730c2aa8c12611f56fef07b02","kubernetes.io/config.mirror":"08db092730c2aa8c12611f56fef07b02","kubernetes.io/config.seen":"2024-02-26T10:38:52.160013561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8448 chars]
	I0226 10:40:19.626055    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:19.626139    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.626139    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.626139    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.630924    3116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0226 10:40:19.630924    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.630924    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.630924    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.630924    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.630924    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.630924    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.630924    3116 round_trippers.go:580]     Audit-Id: dd189d22-049a-4130-b8ec-0bb98c2d2c73
	I0226 10:40:19.630924    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:19.630924    3116 pod_ready.go:92] pod "kube-apiserver-functional-366900" in "kube-system" namespace has status "Ready":"True"
	I0226 10:40:19.630924    3116 pod_ready.go:81] duration metric: took 11.9985ms waiting for pod "kube-apiserver-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:19.630924    3116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:19.630924    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-366900
	I0226 10:40:19.630924    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.630924    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.630924    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.638176    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:19.638200    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.638200    3116 round_trippers.go:580]     Audit-Id: f2fa4beb-ed3c-4565-a6b8-347839c9b133
	I0226 10:40:19.638200    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.638200    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.638200    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.638200    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.638200    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.638875    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-366900","namespace":"kube-system","uid":"959853ef-9603-48a6-ab33-8d3b94ec6c8e","resourceVersion":"437","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.mirror":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.seen":"2024-02-26T10:39:01.785841703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8283 chars]
	I0226 10:40:19.639036    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:19.639036    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:19.639036    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:19.639036    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:19.644955    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:19.644955    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:19.644955    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:19 GMT
	I0226 10:40:19.644955    3116 round_trippers.go:580]     Audit-Id: 0c017941-2c6b-4155-b2cd-1b2680358567
	I0226 10:40:19.644955    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:19.644955    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:19.644955    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:19.644955    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:19.646347    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:20.132267    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-366900
	I0226 10:40:20.132335    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:20.132335    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:20.132335    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:20.139818    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:20.139818    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:20.139942    3116 round_trippers.go:580]     Audit-Id: 3a894ffc-6180-4ba1-bcfe-f612c2fd93ad
	I0226 10:40:20.139942    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:20.139942    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:20.139942    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:20.139942    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:20.139942    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:20 GMT
	I0226 10:40:20.140175    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-366900","namespace":"kube-system","uid":"959853ef-9603-48a6-ab33-8d3b94ec6c8e","resourceVersion":"437","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.mirror":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.seen":"2024-02-26T10:39:01.785841703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8283 chars]
	I0226 10:40:20.140768    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:20.140887    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:20.140887    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:20.140887    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:20.147742    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:20.147805    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:20.147805    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:20.147805    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:20 GMT
	I0226 10:40:20.147805    3116 round_trippers.go:580]     Audit-Id: 0bceb27b-85b4-4255-8f3a-e8054a6d9479
	I0226 10:40:20.147805    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:20.147805    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:20.147910    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:20.148090    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:20.632033    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-366900
	I0226 10:40:20.632033    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:20.632109    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:20.632109    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:20.638793    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:20.639178    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:20.639516    3116 round_trippers.go:580]     Audit-Id: 6b751c7a-8700-48ec-b1fb-9fb2e8c1bdc4
	I0226 10:40:20.639641    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:20.639641    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:20.639641    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:20.639641    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:20.639641    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:20 GMT
	I0226 10:40:20.639641    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-366900","namespace":"kube-system","uid":"959853ef-9603-48a6-ab33-8d3b94ec6c8e","resourceVersion":"437","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.mirror":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.seen":"2024-02-26T10:39:01.785841703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8283 chars]
	I0226 10:40:20.640345    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:20.641004    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:20.641004    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:20.641004    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:20.647952    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:20.648174    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:20.648192    3116 round_trippers.go:580]     Audit-Id: ddd25549-5d21-48cc-af83-cadb3b6ca26b
	I0226 10:40:20.648231    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:20.648231    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:20.648231    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:20.648231    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:20.648231    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:20 GMT
	I0226 10:40:20.648231    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:21.131410    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-366900
	I0226 10:40:21.131603    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.131603    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.131674    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.137957    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:21.138013    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.138013    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.138013    3116 round_trippers.go:580]     Audit-Id: 7bdd7fc6-1976-4f92-acc6-a2409b7deebd
	I0226 10:40:21.138013    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.138013    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.138013    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.138013    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.138352    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-366900","namespace":"kube-system","uid":"959853ef-9603-48a6-ab33-8d3b94ec6c8e","resourceVersion":"520","creationTimestamp":"2024-02-26T10:39:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.mirror":"46c8a7811ef1655878322390d2a81c7c","kubernetes.io/config.seen":"2024-02-26T10:39:01.785841703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8021 chars]
	I0226 10:40:21.139072    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:21.139144    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.139144    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.139181    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.146288    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:21.146288    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.146288    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.146288    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.146288    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.146288    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.146288    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.146288    3116 round_trippers.go:580]     Audit-Id: 9253cb14-ed6f-43ea-a0cf-e0110735ea08
	I0226 10:40:21.146288    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:21.146288    3116 pod_ready.go:92] pod "kube-controller-manager-functional-366900" in "kube-system" namespace has status "Ready":"True"
	I0226 10:40:21.146288    3116 pod_ready.go:81] duration metric: took 1.5153603s waiting for pod "kube-controller-manager-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:21.146288    3116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k75mq" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:21.146288    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-proxy-k75mq
	I0226 10:40:21.146288    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.146288    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.146288    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.152448    3116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0226 10:40:21.152448    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.152448    3116 round_trippers.go:580]     Audit-Id: 330bff7e-09b1-47b3-a5cd-79b7fc4947bd
	I0226 10:40:21.152448    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.152448    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.152448    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.152448    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.152448    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.153401    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k75mq","generateName":"kube-proxy-","namespace":"kube-system","uid":"aa4b9ca9-e541-47ee-8fe3-31fb2382a212","resourceVersion":"439","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2280040f-b8d0-41d2-9bf6-e7612d09d95e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2280040f-b8d0-41d2-9bf6-e7612d09d95e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5531 chars]
	I0226 10:40:21.153401    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:21.153401    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.153401    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.153401    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.160616    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:21.160616    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.160616    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.160616    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.160616    3116 round_trippers.go:580]     Audit-Id: 2bca4a7b-49a7-4fb9-8553-041f286ec26d
	I0226 10:40:21.160616    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.160616    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.160616    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.161157    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:21.161352    3116 pod_ready.go:92] pod "kube-proxy-k75mq" in "kube-system" namespace has status "Ready":"True"
	I0226 10:40:21.161352    3116 pod_ready.go:81] duration metric: took 15.0632ms waiting for pod "kube-proxy-k75mq" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:21.161352    3116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:21.161352    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-366900
	I0226 10:40:21.161352    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.161352    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.161352    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.166478    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:21.166478    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.166478    3116 round_trippers.go:580]     Audit-Id: 49e3b95c-25a8-441a-bdc3-2b82488f1bb4
	I0226 10:40:21.166478    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.166478    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.166478    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.166478    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.166478    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.167379    3116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-366900","namespace":"kube-system","uid":"8f484dd7-9b37-430b-b66e-845e280c54ff","resourceVersion":"510","creationTimestamp":"2024-02-26T10:38:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af8300354704da8bdab1ef7c0b0fc20c","kubernetes.io/config.mirror":"af8300354704da8bdab1ef7c0b0fc20c","kubernetes.io/config.seen":"2024-02-26T10:38:52.160007761Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:38:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4903 chars]
	I0226 10:40:21.167514    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes/functional-366900
	I0226 10:40:21.167514    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.167514    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.167514    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.173366    3116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0226 10:40:21.173814    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.173814    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.173814    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.173814    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.173814    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.173814    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.173814    3116 round_trippers.go:580]     Audit-Id: 139412b7-19b2-413f-bc23-61b78f4ae6b4
	I0226 10:40:21.173814    3116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-26T10:38:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0226 10:40:21.174435    3116 pod_ready.go:92] pod "kube-scheduler-functional-366900" in "kube-system" namespace has status "Ready":"True"
	I0226 10:40:21.174435    3116 pod_ready.go:81] duration metric: took 13.0833ms waiting for pod "kube-scheduler-functional-366900" in "kube-system" namespace to be "Ready" ...
	I0226 10:40:21.174435    3116 pod_ready.go:38] duration metric: took 13.1378407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 10:40:21.174435    3116 api_server.go:52] waiting for apiserver process to appear ...
	I0226 10:40:21.188933    3116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 10:40:21.212615    3116 command_runner.go:130] > 6036
	I0226 10:40:21.214574    3116 api_server.go:72] duration metric: took 13.4749374s to wait for apiserver process to appear ...
	I0226 10:40:21.214574    3116 api_server.go:88] waiting for apiserver healthz status ...
	I0226 10:40:21.214684    3116 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51489/healthz ...
	I0226 10:40:21.227273    3116 api_server.go:279] https://127.0.0.1:51489/healthz returned 200:
	ok
	I0226 10:40:21.227273    3116 round_trippers.go:463] GET https://127.0.0.1:51489/version
	I0226 10:40:21.227273    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.227273    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.227273    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.230521    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0226 10:40:21.230521    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.230521    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.230521    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.230521    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.230521    3116 round_trippers.go:580]     Content-Length: 264
	I0226 10:40:21.230521    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.230521    3116 round_trippers.go:580]     Audit-Id: c422392b-9157-4685-bace-da79b7a51897
	I0226 10:40:21.230521    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.230521    3116 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0226 10:40:21.231561    3116 api_server.go:141] control plane version: v1.28.4
	I0226 10:40:21.231561    3116 api_server.go:131] duration metric: took 16.9866ms to wait for apiserver health ...
	I0226 10:40:21.231561    3116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 10:40:21.231786    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods
	I0226 10:40:21.231876    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.231876    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.231876    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.240495    3116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0226 10:40:21.240838    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.240838    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.240897    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.240897    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.240897    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.240897    3116 round_trippers.go:580]     Audit-Id: 5f621649-379d-4f4c-8156-c9e602aff14c
	I0226 10:40:21.240897    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.243304    3116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"504","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49068 chars]
	I0226 10:40:21.245691    3116 system_pods.go:59] 7 kube-system pods found
	I0226 10:40:21.245756    3116 system_pods.go:61] "coredns-5dd5756b68-xnmfr" [748f3faf-1bf6-4894-aa38-e45189b52880] Running
	I0226 10:40:21.245756    3116 system_pods.go:61] "etcd-functional-366900" [a927a668-2a96-436d-9eae-c0e5178b026d] Running
	I0226 10:40:21.245756    3116 system_pods.go:61] "kube-apiserver-functional-366900" [e1d69c97-977d-4891-9f40-de9843e731c8] Running
	I0226 10:40:21.245756    3116 system_pods.go:61] "kube-controller-manager-functional-366900" [959853ef-9603-48a6-ab33-8d3b94ec6c8e] Running
	I0226 10:40:21.245756    3116 system_pods.go:61] "kube-proxy-k75mq" [aa4b9ca9-e541-47ee-8fe3-31fb2382a212] Running
	I0226 10:40:21.245857    3116 system_pods.go:61] "kube-scheduler-functional-366900" [8f484dd7-9b37-430b-b66e-845e280c54ff] Running
	I0226 10:40:21.245857    3116 system_pods.go:61] "storage-provisioner" [8af976fb-796a-4d3b-a3db-c54011d75859] Running
	I0226 10:40:21.245857    3116 system_pods.go:74] duration metric: took 14.2957ms to wait for pod list to return data ...
	I0226 10:40:21.245899    3116 default_sa.go:34] waiting for default service account to be created ...
	I0226 10:40:21.245995    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/default/serviceaccounts
	I0226 10:40:21.245995    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.246070    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.246070    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.251011    3116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0226 10:40:21.251011    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.251011    3116 round_trippers.go:580]     Audit-Id: e51024df-898d-453c-895a-be4e8aba9cc5
	I0226 10:40:21.251011    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.251011    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.251011    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.251011    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.251011    3116 round_trippers.go:580]     Content-Length: 261
	I0226 10:40:21.251011    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.251011    3116 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6d0c7760-eb37-49fb-a6af-6e1b64d74c64","resourceVersion":"327","creationTimestamp":"2024-02-26T10:39:14Z"}}]}
	I0226 10:40:21.252622    3116 default_sa.go:45] found service account: "default"
	I0226 10:40:21.252694    3116 default_sa.go:55] duration metric: took 6.6993ms for default service account to be created ...
	I0226 10:40:21.252694    3116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 10:40:21.411870    3116 request.go:629] Waited for 159.1754ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods
	I0226 10:40:21.412109    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/namespaces/kube-system/pods
	I0226 10:40:21.412109    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.412109    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.412109    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.420095    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:21.420150    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.420150    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.420150    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.420150    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.420150    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.420150    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.420150    3116 round_trippers.go:580]     Audit-Id: 86537fcf-abbd-40dd-823b-6eabad749a6e
	I0226 10:40:21.421024    3116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xnmfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"748f3faf-1bf6-4894-aa38-e45189b52880","resourceVersion":"504","creationTimestamp":"2024-02-26T10:39:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3e83e730-38cf-4257-b763-4d257f8bb686","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T10:39:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e83e730-38cf-4257-b763-4d257f8bb686\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49068 chars]
	I0226 10:40:21.423481    3116 system_pods.go:86] 7 kube-system pods found
	I0226 10:40:21.423558    3116 system_pods.go:89] "coredns-5dd5756b68-xnmfr" [748f3faf-1bf6-4894-aa38-e45189b52880] Running
	I0226 10:40:21.423558    3116 system_pods.go:89] "etcd-functional-366900" [a927a668-2a96-436d-9eae-c0e5178b026d] Running
	I0226 10:40:21.423558    3116 system_pods.go:89] "kube-apiserver-functional-366900" [e1d69c97-977d-4891-9f40-de9843e731c8] Running
	I0226 10:40:21.423558    3116 system_pods.go:89] "kube-controller-manager-functional-366900" [959853ef-9603-48a6-ab33-8d3b94ec6c8e] Running
	I0226 10:40:21.423558    3116 system_pods.go:89] "kube-proxy-k75mq" [aa4b9ca9-e541-47ee-8fe3-31fb2382a212] Running
	I0226 10:40:21.423558    3116 system_pods.go:89] "kube-scheduler-functional-366900" [8f484dd7-9b37-430b-b66e-845e280c54ff] Running
	I0226 10:40:21.423558    3116 system_pods.go:89] "storage-provisioner" [8af976fb-796a-4d3b-a3db-c54011d75859] Running
	I0226 10:40:21.423558    3116 system_pods.go:126] duration metric: took 170.8631ms to wait for k8s-apps to be running ...
	I0226 10:40:21.423558    3116 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 10:40:21.433741    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 10:40:21.460017    3116 system_svc.go:56] duration metric: took 36.4053ms WaitForService to wait for kubelet.
	I0226 10:40:21.460091    3116 kubeadm.go:581] duration metric: took 13.720379s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 10:40:21.460163    3116 node_conditions.go:102] verifying NodePressure condition ...
	I0226 10:40:21.614127    3116 request.go:629] Waited for 153.6623ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51489/api/v1/nodes
	I0226 10:40:21.614127    3116 round_trippers.go:463] GET https://127.0.0.1:51489/api/v1/nodes
	I0226 10:40:21.614407    3116 round_trippers.go:469] Request Headers:
	I0226 10:40:21.614407    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0226 10:40:21.614407    3116 round_trippers.go:473]     Accept: application/json, */*
	I0226 10:40:21.622136    3116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0226 10:40:21.622136    3116 round_trippers.go:577] Response Headers:
	I0226 10:40:21.622136    3116 round_trippers.go:580]     Audit-Id: 879760eb-23cc-45d2-8ad8-1b65248045d1
	I0226 10:40:21.622136    3116 round_trippers.go:580]     Cache-Control: no-cache, private
	I0226 10:40:21.622136    3116 round_trippers.go:580]     Content-Type: application/json
	I0226 10:40:21.622136    3116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9af31220-5d51-41ed-bbf4-f5fd0b4a42f3
	I0226 10:40:21.622136    3116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2664f2e4-c375-421d-88d4-6ebd6405154d
	I0226 10:40:21.622136    3116 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:40:21 GMT
	I0226 10:40:21.622665    3116 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"functional-366900","uid":"f3c8a78d-59cd-4033-b830-29107d470d37","resourceVersion":"396","creationTimestamp":"2024-02-26T10:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-366900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4011915ad0e9b27ff42994854397cc2ed93516c6","minikube.k8s.io/name":"functional-366900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_26T10_39_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I0226 10:40:21.623274    3116 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0226 10:40:21.623460    3116 node_conditions.go:123] node cpu capacity is 16
	I0226 10:40:21.623555    3116 node_conditions.go:105] duration metric: took 163.3912ms to run NodePressure ...
	I0226 10:40:21.623582    3116 start.go:228] waiting for startup goroutines ...
	I0226 10:40:21.623582    3116 start.go:233] waiting for cluster config update ...
	I0226 10:40:21.623582    3116 start.go:242] writing updated cluster config ...
	I0226 10:40:21.636327    3116 ssh_runner.go:195] Run: rm -f paused
	I0226 10:40:21.772389    3116 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 10:40:21.775665    3116 out.go:177] * Done! kubectl is now configured to use "functional-366900" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 26 10:39:55 functional-366900 systemd[1]: cri-docker.service: Deactivated successfully.
	Feb 26 10:39:55 functional-366900 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Feb 26 10:39:55 functional-366900 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 26 10:39:55 functional-366900 systemd[1]: cri-docker.service: Deactivated successfully.
	Feb 26 10:39:55 functional-366900 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Feb 26 10:39:55 functional-366900 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Start docker client with request timeout 0s"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Loaded network plugin cni"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Docker Info: &{ID:d4e89ea2-37b6-444f-a362-b6ba71e1c07c Containers:15 ContainersRunning:0 ContainersPaused:0 ContainersStopped:15 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-26T10:39:56.079924777Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Ubuntu 22.04.3 LTS (containerized) OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00042c9a0 NCPU:16 MemTotal:33657511936 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:functional-366900 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: DefaultAddressPools:[] Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support]}"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 26 10:39:56 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:39:56Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 26 10:39:56 functional-366900 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4600e7ac99826f55894e46503bcc15ac0b33ddac03213181276718d5cc110ee/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/44ca11266f3ff094cd04b3467773a9448c1bb8ce0e9472700579993650fb6f7a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6c6044a96860561a05898292684b74f2ba358d129d1bc8f5637c8c814bcede0/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/93b99f45fa64436c30936add5689ebc066a2fe6af2718f93fee1a0a168752919/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee7bb7a3c3c6beefbc8b1249830fb839fc00d54d92a663970d95501cf35b43f4/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a3ee952be61f3c1e2f8523181890b73fa7538de16716fe77af77025799ded95/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:00 functional-366900 cri-dockerd[5221]: time="2024-02-26T10:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/431eb99f955c43acdbe17b5b6b254c76f34e2aeb22a61ed194466ad5a8f6d529/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 10:40:02 functional-366900 dockerd[4959]: time="2024-02-26T10:40:02.434854414Z" level=info msg="ignoring event" container=f7a008aa2ee17189433dec77a253c5cf712baeebaa71f22298bafda8d629d66c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	223ae12705e89       6e38f40d628db       25 seconds ago       Running             storage-provisioner       2                   5a3ee952be61f       storage-provisioner
	a5e2085b2d3a0       73deb9a3f7025       43 seconds ago       Running             etcd                      1                   431eb99f955c4       etcd-functional-366900
	9d718b1ebc934       ead0a4a53df89       43 seconds ago       Running             coredns                   1                   ee7bb7a3c3c6b       coredns-5dd5756b68-xnmfr
	f7a008aa2ee17       6e38f40d628db       43 seconds ago       Exited              storage-provisioner       1                   5a3ee952be61f       storage-provisioner
	936e2c1a43a6e       83f6cc407eed8       43 seconds ago       Running             kube-proxy                1                   93b99f45fa644       kube-proxy-k75mq
	f473f580abbe2       7fe0e6f37db33       43 seconds ago       Running             kube-apiserver            1                   f6c6044a96860       kube-apiserver-functional-366900
	985fd863e1554       e3db313c6dbc0       43 seconds ago       Running             kube-scheduler            1                   44ca11266f3ff       kube-scheduler-functional-366900
	35e43925db517       d058aa5ab969c       43 seconds ago       Running             kube-controller-manager   1                   c4600e7ac9982       kube-controller-manager-functional-366900
	c44031f477284       d058aa5ab969c       About a minute ago   Exited              kube-controller-manager   0                   e879a82dc44cf       kube-controller-manager-functional-366900
	3e121b13706a3       e3db313c6dbc0       About a minute ago   Exited              kube-scheduler            0                   bcf2ec83326db       kube-scheduler-functional-366900
	038466cae4021       7fe0e6f37db33       About a minute ago   Exited              kube-apiserver            0                   1acbec1657585       kube-apiserver-functional-366900
	
	
	==> coredns [9d718b1ebc93] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36396 - 8230 "HINFO IN 1783294412921238571.8834197135380501948. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.115139934s
	
	
	==> describe nodes <==
	Name:               functional-366900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-366900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6
	                    minikube.k8s.io/name=functional-366900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_26T10_39_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 10:38:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-366900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 10:40:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 10:40:36 +0000   Mon, 26 Feb 2024 10:38:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 10:40:36 +0000   Mon, 26 Feb 2024 10:38:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 10:40:36 +0000   Mon, 26 Feb 2024 10:38:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 10:40:36 +0000   Mon, 26 Feb 2024 10:39:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-366900
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868664Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868664Ki
	  pods:               110
	System Info:
	  Machine ID:                 716bc2e466f54985b0ed80c160b632f2
	  System UUID:                716bc2e466f54985b0ed80c160b632f2
	  Boot ID:                    cfe72d5e-3bc4-4cbf-8f9a-0bb1f1ad831b
	  Kernel Version:             5.15.133.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-xnmfr                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 etcd-functional-366900                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         102s
	  kube-system                 kube-apiserver-functional-366900             250m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-functional-366900    200m (1%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-k75mq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-functional-366900             100m (0%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 85s   kube-proxy       
	  Normal  Starting                 37s   kube-proxy       
	  Normal  Starting                 103s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s  kubelet          Node functional-366900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s  kubelet          Node functional-366900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s  kubelet          Node functional-366900 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             102s  kubelet          Node functional-366900 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  102s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                102s  kubelet          Node functional-366900 status is now: NodeReady
	  Normal  RegisteredNode           91s   node-controller  Node functional-366900 event: Registered Node functional-366900 in Controller
	  Normal  RegisteredNode           25s   node-controller  Node functional-366900 event: Registered Node functional-366900 in Controller
	
	
	==> dmesg <==
	
	[  +0.003243] WSL (1) ERROR: ConfigMountFsTab:2579: Processing fstab with mount -a failed.
	[  +0.003490] WSL (1) ERROR: ConfigApplyWindowsLibPath:2527: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.005182] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002157] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.007006] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002323] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.010573] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.235644] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.092576] FS-Cache: Duplicate cookie detected
	[  +0.001007] FS-Cache: O-cookie c=00000015 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001310] FS-Cache: O-cookie d=00000000d569eb1c{9P.session} n=000000008acff7ee
	[  +0.001514] FS-Cache: O-key=[10] '34323934393338313935'
	[  +0.000976] FS-Cache: N-cookie c=00000016 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001232] FS-Cache: N-cookie d=00000000d569eb1c{9P.session} n=000000003d68284f
	[  +0.001380] FS-Cache: N-key=[10] '34323934393338313935'
	[  +0.024586] WSL (1) ERROR: ConfigApplyWindowsLibPath:2527: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000004]  failed 2
	[  +0.030277] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.147949] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.637490] netlink: 'init': attribute type 4 has an invalid length.
	[  +0.727829] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a5e2085b2d3a] <==
	{"level":"info","ts":"2024-02-26T10:40:02.834696Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T10:40:02.83562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-26T10:40:02.835738Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-26T10:40:02.836577Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T10:40:02.83671Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T10:40:02.839597Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-26T10:40:02.839978Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-26T10:40:02.840059Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-26T10:40:02.840048Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-26T10:40:02.840199Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-26T10:40:04.144283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-26T10:40:04.144413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-26T10:40:04.144475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-26T10:40:04.144493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-02-26T10:40:04.144501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-26T10:40:04.144512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-02-26T10:40:04.144521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-26T10:40:04.149495Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-366900 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T10:40:04.14949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T10:40:04.14953Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T10:40:04.150904Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T10:40:04.150994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T10:40:04.152217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-26T10:40:04.152329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T10:40:06.687447Z","caller":"traceutil/trace.go:171","msg":"trace[1533579383] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"138.597878ms","start":"2024-02-26T10:40:06.54883Z","end":"2024-02-26T10:40:06.687428Z","steps":["trace[1533579383] 'process raft request'  (duration: 100.147834ms)","trace[1533579383] 'compare'  (duration: 38.061846ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:40:44 up 21 min,  0 users,  load average: 1.62, 2.05, 1.50
	Linux functional-366900 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [038466cae402] <==
	W0226 10:39:52.427530       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.487347       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.499356       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.509992       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.587545       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.603839       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.643138       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.645789       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.680938       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.707608       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.719306       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.723281       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.734969       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.737793       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.787407       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.807494       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.856393       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.882510       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.981396       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:52.993602       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:53.070707       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:53.098575       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:53.173651       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:53.208242       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 10:39:53.242105       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f473f580abbe] <==
	I0226 10:40:06.201004       1 controller.go:134] Starting OpenAPI controller
	I0226 10:40:06.201035       1 controller.go:85] Starting OpenAPI V3 controller
	I0226 10:40:06.198740       1 controller.go:116] Starting legacy_token_tracking_controller
	I0226 10:40:06.201069       1 naming_controller.go:291] Starting NamingConditionController
	I0226 10:40:06.201092       1 establishing_controller.go:76] Starting EstablishingController
	I0226 10:40:06.201111       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0226 10:40:06.201068       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0226 10:40:06.201817       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0226 10:40:06.201841       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0226 10:40:06.436732       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0226 10:40:06.436973       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0226 10:40:06.439114       1 aggregator.go:166] initial CRD sync complete...
	I0226 10:40:06.439810       1 autoregister_controller.go:141] Starting autoregister controller
	I0226 10:40:06.440469       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0226 10:40:06.440488       1 cache.go:39] Caches are synced for autoregister controller
	I0226 10:40:06.439703       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0226 10:40:06.450166       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0226 10:40:06.532511       1 shared_informer.go:318] Caches are synced for configmaps
	I0226 10:40:06.532723       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0226 10:40:06.532802       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0226 10:40:06.532821       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0226 10:40:06.632223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0226 10:40:07.205738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0226 10:40:19.153963       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0226 10:40:19.262123       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [35e43925db51] <==
	I0226 10:40:19.052315       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0226 10:40:19.052461       1 shared_informer.go:318] Caches are synced for job
	I0226 10:40:19.055415       1 shared_informer.go:318] Caches are synced for attach detach
	I0226 10:40:19.055888       1 shared_informer.go:318] Caches are synced for expand
	I0226 10:40:19.055964       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0226 10:40:19.057894       1 shared_informer.go:318] Caches are synced for PVC protection
	I0226 10:40:19.058927       1 shared_informer.go:318] Caches are synced for node
	I0226 10:40:19.058984       1 range_allocator.go:174] "Sending events to api server"
	I0226 10:40:19.059026       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0226 10:40:19.059030       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0226 10:40:19.059035       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0226 10:40:19.062121       1 shared_informer.go:318] Caches are synced for service account
	I0226 10:40:19.062290       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0226 10:40:19.065864       1 shared_informer.go:318] Caches are synced for persistent volume
	I0226 10:40:19.066416       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0226 10:40:19.066518       1 shared_informer.go:318] Caches are synced for crt configmap
	I0226 10:40:19.132612       1 shared_informer.go:318] Caches are synced for namespace
	I0226 10:40:19.132803       1 shared_informer.go:318] Caches are synced for cronjob
	I0226 10:40:19.165438       1 shared_informer.go:318] Caches are synced for resource quota
	I0226 10:40:19.249161       1 shared_informer.go:318] Caches are synced for endpoint
	I0226 10:40:19.253055       1 shared_informer.go:318] Caches are synced for resource quota
	I0226 10:40:19.260554       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0226 10:40:19.572416       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 10:40:19.572598       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0226 10:40:19.646628       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [c44031f47728] <==
	I0226 10:39:13.913097       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0226 10:39:14.139727       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 10:39:14.152756       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 10:39:14.152867       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0226 10:39:14.467853       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-k75mq"
	I0226 10:39:14.605964       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-b9275"
	I0226 10:39:14.648829       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xnmfr"
	I0226 10:39:14.674231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="761.335979ms"
	I0226 10:39:14.744791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.432954ms"
	I0226 10:39:14.745286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.802µs"
	I0226 10:39:14.753110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.801µs"
	I0226 10:39:14.849256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="176.203µs"
	I0226 10:39:14.880415       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0226 10:39:14.966349       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-b9275"
	I0226 10:39:14.995549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.610259ms"
	I0226 10:39:15.088261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.468246ms"
	I0226 10:39:15.088560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.303µs"
	I0226 10:39:15.088788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="176.503µs"
	I0226 10:39:19.207210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.903µs"
	I0226 10:39:19.279146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.802µs"
	I0226 10:39:28.634094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.802µs"
	I0226 10:39:29.546494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.802µs"
	I0226 10:39:29.563782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.502µs"
	I0226 10:39:35.088718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.160496ms"
	I0226 10:39:35.088956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.401µs"
	
	
	==> kube-proxy [936e2c1a43a6] <==
	I0226 10:40:02.639959       1 server_others.go:69] "Using iptables proxy"
	I0226 10:40:06.535676       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0226 10:40:06.668703       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 10:40:06.672375       1 server_others.go:152] "Using iptables Proxier"
	I0226 10:40:06.672553       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 10:40:06.672565       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 10:40:06.672590       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 10:40:06.673285       1 server.go:846] "Version info" version="v1.28.4"
	I0226 10:40:06.673324       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 10:40:06.674175       1 config.go:188] "Starting service config controller"
	I0226 10:40:06.674676       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 10:40:06.675090       1 config.go:315] "Starting node config controller"
	I0226 10:40:06.675262       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 10:40:06.733866       1 config.go:97] "Starting endpoint slice config controller"
	I0226 10:40:06.733981       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 10:40:06.775903       1 shared_informer.go:318] Caches are synced for node config
	I0226 10:40:06.776052       1 shared_informer.go:318] Caches are synced for service config
	I0226 10:40:06.834234       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3e121b13706a] <==
	W0226 10:38:58.777546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0226 10:38:58.777638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0226 10:38:58.815200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0226 10:38:58.815819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0226 10:38:58.854718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 10:38:58.854802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0226 10:38:59.032819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 10:38:59.032956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0226 10:38:59.033340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0226 10:38:59.033453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0226 10:38:59.041416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0226 10:38:59.041516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0226 10:38:59.060236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 10:38:59.060380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0226 10:38:59.131093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0226 10:38:59.131216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0226 10:38:59.146918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 10:38:59.147016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0226 10:38:59.340236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 10:38:59.340279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0226 10:39:01.548004       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 10:39:43.241766       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0226 10:39:43.242680       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0226 10:39:43.242970       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0226 10:39:43.244710       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [985fd863e155] <==
	I0226 10:40:04.481738       1 serving.go:348] Generated self-signed cert in-memory
	W0226 10:40:06.335590       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0226 10:40:06.335634       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0226 10:40:06.335650       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0226 10:40:06.335719       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0226 10:40:06.537879       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0226 10:40:06.537952       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 10:40:06.543596       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0226 10:40:06.543952       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 10:40:06.544829       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0226 10:40:06.545019       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0226 10:40:06.644961       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 26 10:40:00 functional-366900 kubelet[2672]: I0226 10:40:00.871441    2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ca11266f3ff094cd04b3467773a9448c1bb8ce0e9472700579993650fb6f7a"
	Feb 26 10:40:00 functional-366900 kubelet[2672]: I0226 10:40:00.888279    2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b99f45fa64436c30936add5689ebc066a2fe6af2718f93fee1a0a168752919"
	Feb 26 10:40:00 functional-366900 kubelet[2672]: I0226 10:40:00.908161    2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7bb7a3c3c6beefbc8b1249830fb839fc00d54d92a663970d95501cf35b43f4"
	Feb 26 10:40:00 functional-366900 kubelet[2672]: I0226 10:40:00.944147    2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3ee952be61f3c1e2f8523181890b73fa7538de16716fe77af77025799ded95"
	Feb 26 10:40:00 functional-366900 kubelet[2672]: I0226 10:40:00.958530    2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4600e7ac99826f55894e46503bcc15ac0b33ddac03213181276718d5cc110ee"
	Feb 26 10:40:01 functional-366900 kubelet[2672]: E0226 10:40:01.433480    2672 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-366900.17b7637f9625cf52", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-366900", UID:"08db092730c2aa8c12611f56fef07b02", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"functional-366900"}, FirstTimestamp:time.Date(2024, time.February, 26, 10, 39, 43, 241449298, time.Local), LastTimestamp:time.Date(2024, time.February, 26, 10, 39, 43, 241449298, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-366900"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Feb 26 10:40:01 functional-366900 kubelet[2672]: I0226 10:40:01.947836    2672 scope.go:117] "RemoveContainer" containerID="62030b21b19be9c62da254abb35441a3d5d51d2ef5ab19e80e99c645abece52d"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:01.952917    2672 status_manager.go:853] "Failed to get status for pod" podUID="981e8281e5371a424725e53e791052d6" pod="kube-system/etcd-functional-366900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-366900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.033222    2672 status_manager.go:853] "Failed to get status for pod" podUID="46c8a7811ef1655878322390d2a81c7c" pod="kube-system/kube-controller-manager-functional-366900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-366900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.034238    2672 status_manager.go:853] "Failed to get status for pod" podUID="08db092730c2aa8c12611f56fef07b02" pod="kube-system/kube-apiserver-functional-366900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-366900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.034742    2672 status_manager.go:853] "Failed to get status for pod" podUID="aa4b9ca9-e541-47ee-8fe3-31fb2382a212" pod="kube-system/kube-proxy-k75mq" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-k75mq\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.035490    2672 status_manager.go:853] "Failed to get status for pod" podUID="748f3faf-1bf6-4894-aa38-e45189b52880" pod="kube-system/coredns-5dd5756b68-xnmfr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xnmfr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.035982    2672 status_manager.go:853] "Failed to get status for pod" podUID="8af976fb-796a-4d3b-a3db-c54011d75859" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.036388    2672 status_manager.go:853] "Failed to get status for pod" podUID="af8300354704da8bdab1ef7c0b0fc20c" pod="kube-system/kube-scheduler-functional-366900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-366900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.133606    2672 scope.go:117] "RemoveContainer" containerID="d419db5df87bd492f0314ada1d575a69e42bdaa85934d91a5482acbdfe785844"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.341430    2672 scope.go:117] "RemoveContainer" containerID="1e1a33d7969d131cff243a5e68400e64c3b86638cb963a3f83aa9e0b37798234"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.552914    2672 scope.go:117] "RemoveContainer" containerID="b7cb5544a63d644cd9f0dd9a2db077517b2011cdd052790c1dc51be028bb422a"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: I0226 10:40:02.735429    2672 scope.go:117] "RemoveContainer" containerID="f7a008aa2ee17189433dec77a253c5cf712baeebaa71f22298bafda8d629d66c"
	Feb 26 10:40:02 functional-366900 kubelet[2672]: E0226 10:40:02.735795    2672 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8af976fb-796a-4d3b-a3db-c54011d75859)\"" pod="kube-system/storage-provisioner" podUID="8af976fb-796a-4d3b-a3db-c54011d75859"
	Feb 26 10:40:04 functional-366900 kubelet[2672]: I0226 10:40:04.159806    2672 scope.go:117] "RemoveContainer" containerID="f7a008aa2ee17189433dec77a253c5cf712baeebaa71f22298bafda8d629d66c"
	Feb 26 10:40:04 functional-366900 kubelet[2672]: E0226 10:40:04.160348    2672 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8af976fb-796a-4d3b-a3db-c54011d75859)\"" pod="kube-system/storage-provisioner" podUID="8af976fb-796a-4d3b-a3db-c54011d75859"
	Feb 26 10:40:05 functional-366900 kubelet[2672]: I0226 10:40:05.181686    2672 scope.go:117] "RemoveContainer" containerID="f7a008aa2ee17189433dec77a253c5cf712baeebaa71f22298bafda8d629d66c"
	Feb 26 10:40:05 functional-366900 kubelet[2672]: E0226 10:40:05.182130    2672 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8af976fb-796a-4d3b-a3db-c54011d75859)\"" pod="kube-system/storage-provisioner" podUID="8af976fb-796a-4d3b-a3db-c54011d75859"
	Feb 26 10:40:06 functional-366900 kubelet[2672]: E0226 10:40:06.340284    2672 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 10:40:18 functional-366900 kubelet[2672]: I0226 10:40:18.951285    2672 scope.go:117] "RemoveContainer" containerID="f7a008aa2ee17189433dec77a253c5cf712baeebaa71f22298bafda8d629d66c"
	
	
	==> storage-provisioner [223ae12705e8] <==
	I0226 10:40:19.346680       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0226 10:40:19.363608       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0226 10:40:19.363786       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0226 10:40:36.787457       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0226 10:40:36.787937       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32aaad1e-3cd0-4f81-ab11-aefe6361be50", APIVersion:"v1", ResourceVersion:"524", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-366900_e1701da8-4ed9-4996-b2fc-6f08ab28a42a became leader
	I0226 10:40:36.788341       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-366900_e1701da8-4ed9-4996-b2fc-6f08ab28a42a!
	I0226 10:40:36.889942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-366900_e1701da8-4ed9-4996-b2fc-6f08ab28a42a!
	
	
	==> storage-provisioner [f7a008aa2ee1] <==
	I0226 10:40:02.141958       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0226 10:40:02.232908       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
** stderr ** 
	W0226 10:40:42.289004    7692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-366900 -n functional-366900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-366900 -n functional-366900: (1.2971677s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-366900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (6.49s)

TestFunctional/parallel/ConfigCmd (1.73s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-366900 config unset cpus" to be -""- but got *"W0226 10:41:45.264835    7836 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 config get cpus: exit status 14 (264.2034ms)

** stderr ** 
	W0226 10:41:45.589255   10756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-366900 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0226 10:41:45.589255   10756 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-366900 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0226 10:41:45.853281    6868 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-366900 config get cpus" to be -""- but got *"W0226 10:41:46.165653   10244 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-366900 config unset cpus" to be -""- but got *"W0226 10:41:46.433621    9108 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 config get cpus: exit status 14 (264.3101ms)
** stderr ** 
	W0226 10:41:46.716886    5748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-366900 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0226 10:41:46.716886    5748 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.73s)
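Each assertion in TestFunctional/parallel/ConfigCmd fails for the same reason: the expected message is present in stderr, but the Docker CLI context warning is prepended to it. A hypothetical filter (illustrative only, not minikube test code) that drops klog-style warning lines before comparing captured stderr against the expected text:

```python
import re

def strip_cli_warnings(stderr: str) -> str:
    """Drop leading klog-style warning lines such as
    'W0226 10:41:45.589255   10756 main.go:291] ...' so the remaining
    stderr can be compared against the expected message."""
    kept = [
        line for line in stderr.splitlines()
        if not re.match(r"^\s*W\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+\S+\]", line)
    ]
    return "\n".join(kept).strip()

# Shortened stand-in for the captured stderr shown above.
actual = (
    'W0226 10:41:45.589255   10756 main.go:291] Unable to resolve the current '
    'Docker CLI context "default": context "default": context not found\n'
    "Error: specified key could not be found in config"
)
print(strip_cli_warnings(actual))
```

With the warning lines stripped, the residue equals the expected `Error: specified key could not be found in config`, confirming the failure is noise from the missing Docker context rather than a config-command regression.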
TestIngressAddonLegacy/StartLegacyK8sCluster (574.03s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-313300 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0226 10:48:55.840455   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:51:45.604057   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:45.618714   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:45.634755   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:45.665493   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:45.711811   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:45.805228   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:45.976163   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:46.296695   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:46.945391   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:48.236085   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:50.801666   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:51:55.927145   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:52:06.177298   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:52:26.664196   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:53:07.627016   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:53:55.844124   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:54:29.550233   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:55:19.048997   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:56:45.612095   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 10:57:13.400044   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-313300 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 109 (9m33.6628904s)
-- stdout --
	* [ingress-addon-legacy-313300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-313300 in cluster ingress-addon-legacy-313300
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 26 10:57:28 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:28.096586    5908 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-313300_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	  Feb 26 10:57:34 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:34.100138    5908 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-313300_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	  Feb 26 10:57:37 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:37.094240    5908 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-313300_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	
	
-- /stdout --
** stderr ** 
	W0226 10:48:26.039000    3800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 10:48:26.115789    3800 out.go:291] Setting OutFile to fd 308 ...
	I0226 10:48:26.116645    3800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:48:26.116645    3800 out.go:304] Setting ErrFile to fd 584...
	I0226 10:48:26.116645    3800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:48:26.136640    3800 out.go:298] Setting JSON to false
	I0226 10:48:26.138570    3800 start.go:129] hostinfo: {"hostname":"minikube7","uptime":1782,"bootTime":1708942723,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:48:26.138570    3800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:48:26.143507    3800 out.go:177] * [ingress-addon-legacy-313300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:48:26.149687    3800 notify.go:220] Checking for updates...
	I0226 10:48:26.152914    3800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:48:26.154379    3800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 10:48:26.158206    3800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:48:26.160619    3800 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 10:48:26.163275    3800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 10:48:26.167211    3800 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:48:26.445213    3800 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:48:26.454982    3800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:48:26.787709    3800 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:72 SystemTime:2024-02-26 10:48:26.749019921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:48:26.793701    3800 out.go:177] * Using the docker driver based on user configuration
	I0226 10:48:26.795709    3800 start.go:299] selected driver: docker
	I0226 10:48:26.795709    3800 start.go:903] validating driver "docker" against <nil>
	I0226 10:48:26.795709    3800 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 10:48:26.892367    3800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:48:27.223329    3800 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:72 SystemTime:2024-02-26 10:48:27.182477913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:48:27.223850    3800 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 10:48:27.225280    3800 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 10:48:27.228207    3800 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 10:48:27.230475    3800 cni.go:84] Creating CNI manager for ""
	I0226 10:48:27.230475    3800 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 10:48:27.230621    3800 start_flags.go:323] config:
	{Name:ingress-addon-legacy-313300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-313300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:48:27.232986    3800 out.go:177] * Starting control plane node ingress-addon-legacy-313300 in cluster ingress-addon-legacy-313300
	I0226 10:48:27.235226    3800 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 10:48:27.237425    3800 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 10:48:27.241502    3800 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 10:48:27.241502    3800 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 10:48:27.282570    3800 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0226 10:48:27.282570    3800 cache.go:56] Caching tarball of preloaded images
	I0226 10:48:27.283045    3800 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 10:48:27.287767    3800 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0226 10:48:27.289727    3800 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:48:27.358387    3800 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0226 10:48:27.421556    3800 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 10:48:27.422119    3800 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 10:48:30.552854    3800 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:48:30.554439    3800 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:48:31.616676    3800 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0226 10:48:31.617745    3800 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\config.json ...
	I0226 10:48:31.618260    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\config.json: {Name:mkc98c6a2f04cbeba297f505a83d4b34061f81d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:48:31.619095    3800 cache.go:194] Successfully downloaded all kic artifacts
	I0226 10:48:31.619095    3800 start.go:365] acquiring machines lock for ingress-addon-legacy-313300: {Name:mked5c0cf47aa2810e881a25cd859b4b1b0f7636 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 10:48:31.619095    3800 start.go:369] acquired machines lock for "ingress-addon-legacy-313300" in 0s
	I0226 10:48:31.619095    3800 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-313300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-313300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 10:48:31.619095    3800 start.go:125] createHost starting for "" (driver="docker")
	I0226 10:48:31.622714    3800 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0226 10:48:31.623563    3800 start.go:159] libmachine.API.Create for "ingress-addon-legacy-313300" (driver="docker")
	I0226 10:48:31.623563    3800 client.go:168] LocalClient.Create starting
	I0226 10:48:31.624140    3800 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 10:48:31.624140    3800 main.go:141] libmachine: Decoding PEM data...
	I0226 10:48:31.624140    3800 main.go:141] libmachine: Parsing certificate...
	I0226 10:48:31.624810    3800 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 10:48:31.624984    3800 main.go:141] libmachine: Decoding PEM data...
	I0226 10:48:31.624984    3800 main.go:141] libmachine: Parsing certificate...
	I0226 10:48:31.634347    3800 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-313300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 10:48:31.796792    3800 cli_runner.go:211] docker network inspect ingress-addon-legacy-313300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 10:48:31.808336    3800 network_create.go:281] running [docker network inspect ingress-addon-legacy-313300] to gather additional debugging logs...
	I0226 10:48:31.808336    3800 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-313300
	W0226 10:48:31.968913    3800 cli_runner.go:211] docker network inspect ingress-addon-legacy-313300 returned with exit code 1
	I0226 10:48:31.968913    3800 network_create.go:284] error running [docker network inspect ingress-addon-legacy-313300]: docker network inspect ingress-addon-legacy-313300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-313300 not found
	I0226 10:48:31.968913    3800 network_create.go:286] output of [docker network inspect ingress-addon-legacy-313300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-313300 not found
	
	** /stderr **
	I0226 10:48:31.979208    3800 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 10:48:32.161977    3800 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023aa150}
	I0226 10:48:32.162073    3800 network_create.go:124] attempt to create docker network ingress-addon-legacy-313300 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0226 10:48:32.170876    3800 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-313300 ingress-addon-legacy-313300
	I0226 10:48:32.561503    3800 network_create.go:108] docker network ingress-addon-legacy-313300 192.168.49.0/24 created
	I0226 10:48:32.561736    3800 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-313300" container
	I0226 10:48:32.579714    3800 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 10:48:32.757013    3800 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-313300 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-313300 --label created_by.minikube.sigs.k8s.io=true
	I0226 10:48:32.917318    3800 oci.go:103] Successfully created a docker volume ingress-addon-legacy-313300
	I0226 10:48:32.924555    3800 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-313300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-313300 --entrypoint /usr/bin/test -v ingress-addon-legacy-313300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 10:48:35.308448    3800 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-313300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-313300 --entrypoint /usr/bin/test -v ingress-addon-legacy-313300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (2.383758s)
	I0226 10:48:35.308448    3800 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-313300
	I0226 10:48:35.308448    3800 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 10:48:35.308448    3800 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 10:48:35.318136    3800 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-313300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 10:49:04.225565    3800 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-313300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (28.9071479s)
	I0226 10:49:04.225722    3800 kic.go:203] duration metric: took 28.917173 seconds to extract preloaded images to volume
	I0226 10:49:04.238984    3800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:49:04.627738    3800 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:72 SystemTime:2024-02-26 10:49:04.588159928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 10:49:04.637942    3800 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 10:49:04.994116    3800 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-313300 --name ingress-addon-legacy-313300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-313300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-313300 --network ingress-addon-legacy-313300 --ip 192.168.49.2 --volume ingress-addon-legacy-313300:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 10:49:05.878883    3800 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-313300 --format={{.State.Running}}
	I0226 10:49:06.063767    3800 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-313300 --format={{.State.Status}}
	I0226 10:49:06.249859    3800 cli_runner.go:164] Run: docker exec ingress-addon-legacy-313300 stat /var/lib/dpkg/alternatives/iptables
	I0226 10:49:06.521151    3800 oci.go:144] the created container "ingress-addon-legacy-313300" has a running status.
	I0226 10:49:06.521151    3800 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa...
	I0226 10:49:06.643277    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0226 10:49:06.652389    3800 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 10:49:06.876834    3800 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-313300 --format={{.State.Status}}
	I0226 10:49:07.065338    3800 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 10:49:07.065338    3800 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-313300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 10:49:07.341728    3800 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa...
	I0226 10:49:09.655895    3800 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-313300 --format={{.State.Status}}
	I0226 10:49:09.821529    3800 machine.go:88] provisioning docker machine ...
	I0226 10:49:09.821529    3800 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-313300"
	I0226 10:49:09.830156    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:10.000443    3800 main.go:141] libmachine: Using SSH client type: native
	I0226 10:49:10.014365    3800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51959 <nil> <nil>}
	I0226 10:49:10.014365    3800 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-313300 && echo "ingress-addon-legacy-313300" | sudo tee /etc/hostname
	I0226 10:49:10.231704    3800 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-313300
	
	I0226 10:49:10.241108    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:10.417176    3800 main.go:141] libmachine: Using SSH client type: native
	I0226 10:49:10.418176    3800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51959 <nil> <nil>}
	I0226 10:49:10.418176    3800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-313300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-313300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-313300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 10:49:10.605382    3800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 10:49:10.605918    3800 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 10:49:10.606011    3800 ubuntu.go:177] setting up certificates
	I0226 10:49:10.606011    3800 provision.go:83] configureAuth start
	I0226 10:49:10.615707    3800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-313300
	I0226 10:49:10.769458    3800 provision.go:138] copyHostCerts
	I0226 10:49:10.770003    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0226 10:49:10.770277    3800 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 10:49:10.770277    3800 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 10:49:10.770277    3800 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 10:49:10.771821    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0226 10:49:10.771821    3800 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 10:49:10.771821    3800 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 10:49:10.772630    3800 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 10:49:10.773305    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0226 10:49:10.773305    3800 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 10:49:10.773305    3800 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 10:49:10.774219    3800 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 10:49:10.775247    3800 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ingress-addon-legacy-313300 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-313300]
	I0226 10:49:11.112530    3800 provision.go:172] copyRemoteCerts
	I0226 10:49:11.125513    3800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 10:49:11.139937    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:11.288722    3800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:49:11.431007    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0226 10:49:11.431007    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 10:49:11.473739    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0226 10:49:11.474375    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0226 10:49:11.511746    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0226 10:49:11.512080    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 10:49:11.551215    3800 provision.go:86] duration metric: configureAuth took 945.2002ms
	I0226 10:49:11.551215    3800 ubuntu.go:193] setting minikube options for container-runtime
	I0226 10:49:11.551215    3800 config.go:182] Loaded profile config "ingress-addon-legacy-313300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 10:49:11.560634    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:11.725634    3800 main.go:141] libmachine: Using SSH client type: native
	I0226 10:49:11.726374    3800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51959 <nil> <nil>}
	I0226 10:49:11.726374    3800 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 10:49:11.916316    3800 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 10:49:11.916375    3800 ubuntu.go:71] root file system type: overlay
	I0226 10:49:11.916438    3800 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 10:49:11.925954    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:12.098797    3800 main.go:141] libmachine: Using SSH client type: native
	I0226 10:49:12.099251    3800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51959 <nil> <nil>}
	I0226 10:49:12.099346    3800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 10:49:12.313066    3800 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 10:49:12.326573    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:12.500777    3800 main.go:141] libmachine: Using SSH client type: native
	I0226 10:49:12.501395    3800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 51959 <nil> <nil>}
	I0226 10:49:12.501395    3800 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 10:49:13.741335    3800 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 10:49:12.298117578 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 10:49:13.741335    3800 machine.go:91] provisioned docker machine in 3.919792s
	I0226 10:49:13.741446    3800 client.go:171] LocalClient.Create took 42.1177362s
	I0226 10:49:13.741446    3800 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-313300" took 42.1177362s
	I0226 10:49:13.741446    3800 start.go:300] post-start starting for "ingress-addon-legacy-313300" (driver="docker")
	I0226 10:49:13.741446    3800 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 10:49:13.752694    3800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 10:49:13.760963    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:13.925754    3800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:49:14.084079    3800 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 10:49:14.094119    3800 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 10:49:14.094119    3800 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 10:49:14.094119    3800 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 10:49:14.094119    3800 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 10:49:14.094119    3800 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 10:49:14.097418    3800 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 10:49:14.098384    3800 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 10:49:14.098384    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> /etc/ssl/certs/118682.pem
	I0226 10:49:14.114620    3800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 10:49:14.134019    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 10:49:14.176375    3800 start.go:303] post-start completed in 434.9281ms
	I0226 10:49:14.188632    3800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-313300
	I0226 10:49:14.344658    3800 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\config.json ...
	I0226 10:49:14.357851    3800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 10:49:14.366408    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:14.532395    3800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:49:14.670142    3800 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 10:49:14.683976    3800 start.go:128] duration metric: createHost completed in 43.0647313s
	I0226 10:49:14.683976    3800 start.go:83] releasing machines lock for "ingress-addon-legacy-313300", held for 43.0647313s
	I0226 10:49:14.692761    3800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-313300
	I0226 10:49:14.860882    3800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 10:49:14.871425    3800 ssh_runner.go:195] Run: cat /version.json
	I0226 10:49:14.872209    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:14.879302    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:15.044937    3800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:49:15.059286    3800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:49:15.363627    3800 ssh_runner.go:195] Run: systemctl --version
	I0226 10:49:15.387513    3800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 10:49:15.411467    3800 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0226 10:49:15.429903    3800 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0226 10:49:15.441618    3800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 10:49:15.483055    3800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 10:49:15.511614    3800 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 10:49:15.511614    3800 start.go:475] detecting cgroup driver to use...
	I0226 10:49:15.511614    3800 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 10:49:15.511614    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 10:49:15.551873    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0226 10:49:15.583262    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 10:49:15.603392    3800 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 10:49:15.614827    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 10:49:15.647937    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 10:49:15.680665    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 10:49:15.714799    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 10:49:15.749012    3800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 10:49:15.781102    3800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 10:49:15.812400    3800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 10:49:15.842348    3800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 10:49:15.870494    3800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:49:16.020928    3800 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 10:49:16.163680    3800 start.go:475] detecting cgroup driver to use...
	I0226 10:49:16.163680    3800 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 10:49:16.177600    3800 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 10:49:16.203555    3800 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 10:49:16.214602    3800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 10:49:16.235535    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 10:49:16.275538    3800 ssh_runner.go:195] Run: which cri-dockerd
	I0226 10:49:16.303946    3800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 10:49:16.325232    3800 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 10:49:16.372821    3800 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 10:49:16.556750    3800 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 10:49:16.701489    3800 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 10:49:16.701489    3800 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 10:49:16.745187    3800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:49:16.883579    3800 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 10:49:17.417562    3800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 10:49:17.481783    3800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 10:49:17.531147    3800 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0226 10:49:17.539843    3800 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-313300 dig +short host.docker.internal
	I0226 10:49:17.798135    3800 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 10:49:17.811422    3800 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 10:49:17.824505    3800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 10:49:17.851766    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:49:18.015377    3800 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 10:49:18.025104    3800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 10:49:18.066631    3800 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0226 10:49:18.066631    3800 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0226 10:49:18.078534    3800 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 10:49:18.109227    3800 ssh_runner.go:195] Run: which lz4
	I0226 10:49:18.121380    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0226 10:49:18.133431    3800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 10:49:18.144510    3800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 10:49:18.144510    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0226 10:49:33.957762    3800 docker.go:649] Took 15.836117 seconds to copy over tarball
	I0226 10:49:33.967551    3800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 10:49:37.588818    3800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.6212539s)
	I0226 10:49:37.588818    3800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 10:49:37.682775    3800 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 10:49:37.703746    3800 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0226 10:49:37.748686    3800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 10:49:37.892462    3800 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 10:49:48.101750    3800 ssh_runner.go:235] Completed: sudo systemctl restart docker: (10.2083401s)
	I0226 10:49:48.112576    3800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 10:49:48.154190    3800 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0226 10:49:48.154280    3800 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0226 10:49:48.154280    3800 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 10:49:48.169328    3800 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 10:49:48.173465    3800 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 10:49:48.175619    3800 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0226 10:49:48.178806    3800 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0226 10:49:48.180632    3800 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 10:49:48.180861    3800 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 10:49:48.180861    3800 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 10:49:48.181698    3800 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0226 10:49:48.184518    3800 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 10:49:48.188063    3800 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 10:49:48.192255    3800 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0226 10:49:48.195099    3800 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0226 10:49:48.196673    3800 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 10:49:48.198645    3800 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 10:49:48.202960    3800 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0226 10:49:48.208258    3800 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0226 10:49:48.289209    3800 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 10:49:48.366110    3800 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 10:49:48.443434    3800 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 10:49:48.522381    3800 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 10:49:48.613963    3800 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 10:49:48.673413    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0226 10:49:48.690815    3800 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 10:49:48.726172    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0226 10:49:48.750970    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0226 10:49:48.767968    3800 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0226 10:49:48.767968    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0226 10:49:48.767968    3800 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0226 10:49:48.776978    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	W0226 10:49:48.782371    3800 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 10:49:48.793381    3800 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0226 10:49:48.793381    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0226 10:49:48.793381    3800 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 10:49:48.801370    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0226 10:49:48.812365    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0226 10:49:48.815375    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0226 10:49:48.841364    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0226 10:49:48.848369    3800 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0226 10:49:48.848369    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0226 10:49:48.848369    3800 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0226 10:49:48.860381    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0226 10:49:48.860381    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0226 10:49:48.861366    3800 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 10:49:48.894232    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0226 10:49:48.894232    3800 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0226 10:49:48.894232    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0226 10:49:48.894770    3800 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 10:49:48.906928    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 10:49:48.910929    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0226 10:49:48.942522    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0226 10:49:48.943522    3800 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0226 10:49:48.943522    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0226 10:49:48.944518    3800 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 10:49:48.952510    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0226 10:49:48.989966    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0226 10:49:49.035374    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0226 10:49:49.063266    3800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0226 10:49:49.081077    3800 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0226 10:49:49.081077    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0226 10:49:49.081077    3800 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0226 10:49:49.091820    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0226 10:49:49.112112    3800 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0226 10:49:49.112112    3800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0226 10:49:49.112112    3800 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 10:49:49.121728    3800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0226 10:49:49.141513    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0226 10:49:49.161090    3800 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0226 10:49:49.161090    3800 cache_images.go:92] LoadImages completed in 1.0067421s
	W0226 10:49:49.161674    3800 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2: The system cannot find the path specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2: The system cannot find the path specified.
	I0226 10:49:49.171801    3800 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 10:49:49.271629    3800 cni.go:84] Creating CNI manager for ""
	I0226 10:49:49.273350    3800 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 10:49:49.273455    3800 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 10:49:49.273639    3800 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-313300 NodeName:ingress-addon-legacy-313300 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 10:49:49.273834    3800 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-313300"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 10:49:49.274033    3800 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-313300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-313300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 10:49:49.286218    3800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0226 10:49:49.307232    3800 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 10:49:49.320599    3800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 10:49:49.340913    3800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0226 10:49:49.368358    3800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0226 10:49:49.398728    3800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0226 10:49:49.439030    3800 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0226 10:49:49.450734    3800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 10:49:49.470845    3800 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300 for IP: 192.168.49.2
	I0226 10:49:49.470933    3800 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:49.470933    3800 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 10:49:49.471906    3800 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 10:49:49.472559    3800 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\client.key
	I0226 10:49:49.473312    3800 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\client.crt with IP's: []
	I0226 10:49:49.664384    3800 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\client.crt ...
	I0226 10:49:49.664384    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\client.crt: {Name:mk46ff188a328daab9c7f993b171199de43559b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:49.665902    3800 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\client.key ...
	I0226 10:49:49.665902    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\client.key: {Name:mk799ac37db491b66eb62a145da37e43d2cb1ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:49.667055    3800 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key.dd3b5fb2
	I0226 10:49:49.667398    3800 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 10:49:49.977624    3800 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt.dd3b5fb2 ...
	I0226 10:49:49.977624    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt.dd3b5fb2: {Name:mkd8a6595c245720284a1664b98fdd254304e5ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:49.979200    3800 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key.dd3b5fb2 ...
	I0226 10:49:49.979200    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key.dd3b5fb2: {Name:mk85b652f54aa84eb1f50beca08ae7f75e62bc66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:49.979443    3800 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt
	I0226 10:49:49.990150    3800 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key
	I0226 10:49:49.992254    3800 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.key
	I0226 10:49:49.992254    3800 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.crt with IP's: []
	I0226 10:49:50.126966    3800 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.crt ...
	I0226 10:49:50.126966    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.crt: {Name:mk035de46796a9ec7f2eaf576baaf06706325604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:50.128795    3800 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.key ...
	I0226 10:49:50.128795    3800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.key: {Name:mk6924ddff764217d29ef289c6abed1a7c4e9a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 10:49:50.129931    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0226 10:49:50.130076    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0226 10:49:50.130323    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0226 10:49:50.138768    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0226 10:49:50.138768    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0226 10:49:50.139698    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0226 10:49:50.139698    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0226 10:49:50.139698    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0226 10:49:50.140423    3800 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 10:49:50.140973    3800 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 10:49:50.140973    3800 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 10:49:50.141332    3800 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 10:49:50.141538    3800 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 10:49:50.141770    3800 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 10:49:50.141986    3800 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 10:49:50.142419    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem -> /usr/share/ca-certificates/11868.pem
	I0226 10:49:50.142543    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> /usr/share/ca-certificates/118682.pem
	I0226 10:49:50.142679    3800 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:49:50.142822    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 10:49:50.190753    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 10:49:50.229431    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 10:49:50.270048    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-313300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 10:49:50.308367    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 10:49:50.345865    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 10:49:50.388997    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 10:49:50.428517    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 10:49:50.465904    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 10:49:50.504479    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 10:49:50.543413    3800 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 10:49:50.583968    3800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 10:49:50.625473    3800 ssh_runner.go:195] Run: openssl version
	I0226 10:49:50.650939    3800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 10:49:50.682982    3800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 10:49:50.697944    3800 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 10:49:50.708532    3800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 10:49:50.741411    3800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 10:49:50.773538    3800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 10:49:50.802789    3800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 10:49:50.815605    3800 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 10:49:50.826968    3800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 10:49:50.857501    3800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 10:49:50.888004    3800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 10:49:50.917390    3800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:49:50.931066    3800 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:49:50.941628    3800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 10:49:50.970365    3800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 10:49:50.999779    3800 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 10:49:51.013237    3800 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 10:49:51.013780    3800 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-313300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-313300 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:49:51.022618    3800 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 10:49:51.075926    3800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 10:49:51.108422    3800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 10:49:51.130990    3800 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 10:49:51.141605    3800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 10:49:51.160174    3800 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 10:49:51.160252    3800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 10:49:51.259442    3800 kubeadm.go:322] W0226 10:49:51.257519    1915 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0226 10:49:51.476936    3800 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 10:49:51.477271    3800 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 10:49:51.570823    3800 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0226 10:49:51.706464    3800 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 10:49:55.186555    3800 kubeadm.go:322] W0226 10:49:55.184926    1915 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 10:49:55.189099    3800 kubeadm.go:322] W0226 10:49:55.187669    1915 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 10:53:55.196331    3800 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 10:53:55.196983    3800 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 10:53:55.203379    3800 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0226 10:53:55.203379    3800 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 10:53:55.203379    3800 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 10:53:55.203379    3800 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 10:53:55.203379    3800 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 10:53:55.204957    3800 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 10:53:55.205244    3800 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 10:53:55.205304    3800 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 10:53:55.205304    3800 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 10:53:55.208352    3800 out.go:204]   - Generating certificates and keys ...
	I0226 10:53:55.208352    3800 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 10:53:55.208988    3800 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 10:53:55.209281    3800 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 10:53:55.209447    3800 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 10:53:55.209529    3800 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 10:53:55.209569    3800 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 10:53:55.209569    3800 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 10:53:55.209569    3800 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-313300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 10:53:55.209569    3800 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 10:53:55.210173    3800 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-313300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 10:53:55.210355    3800 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 10:53:55.210424    3800 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 10:53:55.210477    3800 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 10:53:55.210477    3800 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 10:53:55.210477    3800 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 10:53:55.210477    3800 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 10:53:55.210477    3800 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 10:53:55.210477    3800 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 10:53:55.210477    3800 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 10:53:55.213561    3800 out.go:204]   - Booting up control plane ...
	I0226 10:53:55.213561    3800 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 10:53:55.213561    3800 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 10:53:55.214202    3800 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 10:53:55.214202    3800 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 10:53:55.214202    3800 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 10:53:55.214816    3800 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 10:53:55.214870    3800 kubeadm.go:322] 
	I0226 10:53:55.214892    3800 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0226 10:53:55.214892    3800 kubeadm.go:322] 		timed out waiting for the condition
	I0226 10:53:55.214892    3800 kubeadm.go:322] 
	I0226 10:53:55.214892    3800 kubeadm.go:322] 	This error is likely caused by:
	I0226 10:53:55.214892    3800 kubeadm.go:322] 		- The kubelet is not running
	I0226 10:53:55.214892    3800 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 10:53:55.214892    3800 kubeadm.go:322] 
	I0226 10:53:55.215567    3800 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 10:53:55.215705    3800 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0226 10:53:55.215817    3800 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0226 10:53:55.215817    3800 kubeadm.go:322] 
	I0226 10:53:55.215817    3800 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 10:53:55.215817    3800 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0226 10:53:55.215817    3800 kubeadm.go:322] 
	I0226 10:53:55.216336    3800 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0226 10:53:55.216464    3800 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0226 10:53:55.216554    3800 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0226 10:53:55.216554    3800 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0226 10:53:55.216554    3800 kubeadm.go:322] 
	W0226 10:53:55.216554    3800 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-313300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-313300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:49:51.257519    1915 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:49:55.184926    1915 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:49:55.187669    1915 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-313300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-313300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:49:51.257519    1915 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:49:55.184926    1915 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:49:55.187669    1915 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
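The troubleshooting commands kubeadm suggests above have to run inside the node container, not on the Windows host. A sketch of how to reach them through `minikube ssh` (the profile name `ingress-addon-legacy-313300` is taken from the log above; this is an illustrative sequence, not part of the test run):

```shell
# Inspect the kubelet from inside the minikube node for this profile.
minikube -p ingress-addon-legacy-313300 ssh -- sudo systemctl status kubelet
minikube -p ingress-addon-legacy-313300 ssh -- sudo journalctl -xeu kubelet

# List Kubernetes containers, as kubeadm's error text recommends.
minikube -p ingress-addon-legacy-313300 ssh -- "docker ps -a | grep kube | grep -v pause"
```

These require the cluster container from this run to still exist, so they are only useful while reproducing the failure locally.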
	
	I0226 10:53:55.217076    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 10:53:56.654286    3800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.4370674s)
	I0226 10:53:56.666872    3800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 10:53:56.690317    3800 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 10:53:56.701147    3800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 10:53:56.719630    3800 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
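The "config check failed, skipping stale config cleanup" message above is expected here: `kubeadm reset` already removed the kubeconfigs, so the `ls` exits with status 2 and minikube retries `kubeadm init` directly. A minimal standalone sketch of that check (the function name is hypothetical; the file list matches the log):

```shell
# Mirror of minikube's stale-config probe: ls the four kubeconfig files and
# treat a non-zero exit as "nothing stale to clean up".
check_stale_configs() {
  ls -la "$@" > /dev/null 2>&1
}

if check_stale_configs /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
     /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
  echo "stale configs present; cleanup would run"
else
  echo "skipping stale config cleanup"
fi
```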
	I0226 10:53:56.719630    3800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 10:53:56.804414    3800 kubeadm.go:322] W0226 10:53:56.802974    5698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0226 10:53:57.009378    3800 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 10:53:57.009616    3800 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 10:53:57.101975    3800 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0226 10:53:57.248088    3800 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 10:53:58.829493    3800 kubeadm.go:322] W0226 10:53:58.828226    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 10:53:58.831344    3800 kubeadm.go:322] W0226 10:53:58.830021    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 10:57:58.843523    3800 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 10:57:58.843523    3800 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 10:57:58.849408    3800 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0226 10:57:58.849408    3800 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 10:57:58.849965    3800 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 10:57:58.850158    3800 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 10:57:58.850158    3800 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 10:57:58.850845    3800 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 10:57:58.850845    3800 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 10:57:58.850845    3800 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 10:57:58.850845    3800 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 10:57:58.856791    3800 out.go:204]   - Generating certificates and keys ...
	I0226 10:57:58.856916    3800 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 10:57:58.856916    3800 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 10:57:58.857446    3800 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 10:57:58.857685    3800 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 10:57:58.857685    3800 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 10:57:58.857685    3800 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 10:57:58.858345    3800 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 10:57:58.858504    3800 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 10:57:58.858504    3800 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 10:57:58.858504    3800 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 10:57:58.858504    3800 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 10:57:58.859079    3800 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 10:57:58.859220    3800 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 10:57:58.859220    3800 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 10:57:58.859220    3800 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 10:57:58.859220    3800 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 10:57:58.859799    3800 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 10:57:58.862613    3800 out.go:204]   - Booting up control plane ...
	I0226 10:57:58.862613    3800 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 10:57:58.863223    3800 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 10:57:58.863421    3800 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 10:57:58.863421    3800 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 10:57:58.863421    3800 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 10:57:58.864007    3800 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 10:57:58.864007    3800 kubeadm.go:322] 
	I0226 10:57:58.864007    3800 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0226 10:57:58.864208    3800 kubeadm.go:322] 		timed out waiting for the condition
	I0226 10:57:58.864359    3800 kubeadm.go:322] 
	I0226 10:57:58.864579    3800 kubeadm.go:322] 	This error is likely caused by:
	I0226 10:57:58.864712    3800 kubeadm.go:322] 		- The kubelet is not running
	I0226 10:57:58.864867    3800 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 10:57:58.864867    3800 kubeadm.go:322] 
	I0226 10:57:58.865041    3800 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 10:57:58.865041    3800 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0226 10:57:58.865316    3800 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0226 10:57:58.865395    3800 kubeadm.go:322] 
	I0226 10:57:58.865539    3800 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 10:57:58.865539    3800 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0226 10:57:58.865539    3800 kubeadm.go:322] 
	I0226 10:57:58.865539    3800 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0226 10:57:58.866128    3800 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0226 10:57:58.866128    3800 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0226 10:57:58.866128    3800 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0226 10:57:58.866128    3800 kubeadm.go:322] 
	I0226 10:57:58.866128    3800 kubeadm.go:406] StartCluster complete in 8m7.8504604s
	I0226 10:57:58.873841    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 10:57:58.912468    3800 logs.go:276] 0 containers: []
	W0226 10:57:58.912468    3800 logs.go:278] No container was found matching "kube-apiserver"
	I0226 10:57:58.921503    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 10:57:58.957272    3800 logs.go:276] 0 containers: []
	W0226 10:57:58.957272    3800 logs.go:278] No container was found matching "etcd"
	I0226 10:57:58.965843    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 10:57:59.000193    3800 logs.go:276] 0 containers: []
	W0226 10:57:59.000193    3800 logs.go:278] No container was found matching "coredns"
	I0226 10:57:59.008604    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 10:57:59.041875    3800 logs.go:276] 0 containers: []
	W0226 10:57:59.041875    3800 logs.go:278] No container was found matching "kube-scheduler"
	I0226 10:57:59.051262    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 10:57:59.092928    3800 logs.go:276] 0 containers: []
	W0226 10:57:59.092928    3800 logs.go:278] No container was found matching "kube-proxy"
	I0226 10:57:59.101930    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 10:57:59.142560    3800 logs.go:276] 0 containers: []
	W0226 10:57:59.142560    3800 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 10:57:59.151777    3800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 10:57:59.187939    3800 logs.go:276] 0 containers: []
	W0226 10:57:59.188039    3800 logs.go:278] No container was found matching "kindnet"
	I0226 10:57:59.188039    3800 logs.go:123] Gathering logs for kubelet ...
	I0226 10:57:59.188039    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 10:57:59.228784    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:28 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:28.096586    5908 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-313300_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0226 10:57:59.235469    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:34 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:34.100138    5908 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-313300_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0226 10:57:59.240342    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:37 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:37.094240    5908 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-313300_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0226 10:57:59.241806    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:38 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:38.093808    5908 pod_workers.go:191] Error syncing pod b63a9aaa3c42107b96bda45e8263f4be ("etcd-ingress-addon-legacy-313300_kube-system(b63a9aaa3c42107b96bda45e8263f4be)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0226 10:57:59.243307    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:39 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:39.102792    5908 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-313300_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0226 10:57:59.252397    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:48 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:48.095421    5908 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-313300_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0226 10:57:59.252652    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:48 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:48.096900    5908 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-313300_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0226 10:57:59.254333    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:49 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:49.099450    5908 pod_workers.go:191] Error syncing pod b63a9aaa3c42107b96bda45e8263f4be ("etcd-ingress-addon-legacy-313300_kube-system(b63a9aaa3c42107b96bda45e8263f4be)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0226 10:57:59.257327    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:50 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:50.094495    5908 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-313300_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0226 10:57:59.266114    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:59 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:59.120668    5908 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-313300_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0226 10:57:59.267096    3800 logs.go:138] Found kubelet problem: Feb 26 10:57:59 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:59.125019    5908 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-313300_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
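The repeated `ImageInspectError: "Id or size of image ... is not set"` failures above suggest the cached control-plane images were not loaded correctly into the node's Docker daemon. A hedged way to verify that while reproducing locally (illustrative commands against this run's profile; `docker image inspect --format` is standard Docker CLI):

```shell
# Check whether the image the kubelet could not inspect is actually present
# inside the node, and whether Docker reports a usable Id and Size for it.
minikube -p ingress-addon-legacy-313300 ssh -- \
  docker images k8s.gcr.io/kube-apiserver
minikube -p ingress-addon-legacy-313300 ssh -- \
  docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.18.20
```

An empty or errored inspect here would point at a corrupted preload/image cache rather than a kubelet misconfiguration.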
	I0226 10:57:59.267096    3800 logs.go:123] Gathering logs for dmesg ...
	I0226 10:57:59.267096    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 10:57:59.299253    3800 logs.go:123] Gathering logs for describe nodes ...
	I0226 10:57:59.299253    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 10:57:59.403613    3800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 10:57:59.403667    3800 logs.go:123] Gathering logs for Docker ...
	I0226 10:57:59.403667    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 10:57:59.439268    3800 logs.go:123] Gathering logs for container status ...
	I0226 10:57:59.439268    3800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 10:57:59.517080    3800 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:53:56.802974    5698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:53:58.828226    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:53:58.830021    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 10:57:59.517239    3800 out.go:239] * 
	* 
	W0226 10:57:59.517239    3800 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:53:56.802974    5698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:53:58.828226    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:53:58.830021    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:53:56.802974    5698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:53:58.828226    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:53:58.830021    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 10:57:59.517239    3800 out.go:239] * 
	* 
	W0226 10:57:59.519275    3800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 10:57:59.522176    3800 out.go:177] X Problems detected in kubelet:
	I0226 10:57:59.527834    3800 out.go:177]   Feb 26 10:57:28 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:28.096586    5908 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-313300_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	I0226 10:57:59.534390    3800 out.go:177]   Feb 26 10:57:34 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:34.100138    5908 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-313300_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	I0226 10:57:59.540216    3800 out.go:177]   Feb 26 10:57:37 ingress-addon-legacy-313300 kubelet[5908]: E0226 10:57:37.094240    5908 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-313300_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	I0226 10:57:59.547237    3800 out.go:177] 
	W0226 10:57:59.549796    3800 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:53:56.802974    5698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:53:58.828226    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:53:58.830021    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:53:56.802974    5698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:53:58.828226    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:53:58.830021    5698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 10:57:59.549796    3800 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 10:57:59.550820    3800 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 10:57:59.555191    3800 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-313300 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (574.03s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (27.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-313300 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-313300 addons enable ingress --alsologtostderr -v=5: exit status 1 (25.9825877s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1

-- /stdout --
** stderr ** 
	W0226 10:58:00.104628    9860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 10:58:00.193624    9860 out.go:291] Setting OutFile to fd 896 ...
	I0226 10:58:00.212361    9860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:58:00.212361    9860 out.go:304] Setting ErrFile to fd 924...
	I0226 10:58:00.212361    9860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:58:00.228902    9860 mustload.go:65] Loading cluster: ingress-addon-legacy-313300
	I0226 10:58:00.229573    9860 config.go:182] Loaded profile config "ingress-addon-legacy-313300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 10:58:00.229573    9860 addons.go:597] checking whether the cluster is paused
	I0226 10:58:00.230413    9860 config.go:182] Loaded profile config "ingress-addon-legacy-313300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 10:58:00.230538    9860 host.go:66] Checking if "ingress-addon-legacy-313300" exists ...
	I0226 10:58:00.245661    9860 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-313300 --format={{.State.Status}}
	I0226 10:58:00.425401    9860 ssh_runner.go:195] Run: systemctl --version
	I0226 10:58:00.433036    9860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:58:00.586459    9860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:58:00.719418    9860 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 10:58:00.770906    9860 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0226 10:58:00.774255    9860 config.go:182] Loaded profile config "ingress-addon-legacy-313300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 10:58:00.774255    9860 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-313300"
	I0226 10:58:00.774255    9860 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-313300"
	I0226 10:58:00.774255    9860 host.go:66] Checking if "ingress-addon-legacy-313300" exists ...
	I0226 10:58:00.791265    9860 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-313300 --format={{.State.Status}}
	I0226 10:58:00.960189    9860 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0226 10:58:00.963249    9860 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0226 10:58:00.967840    9860 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0226 10:58:00.970047    9860 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0226 10:58:00.973138    9860 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0226 10:58:00.973138    9860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0226 10:58:00.982068    9860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-313300
	I0226 10:58:01.158170    9860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51959 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-313300\id_rsa Username:docker}
	I0226 10:58:01.323979    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:01.419454    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:01.419454    9860 retry.go:31] will retry after 270.387646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:01.714994    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:01.812706    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:01.812805    9860 retry.go:31] will retry after 348.66897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:02.177129    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:02.271640    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:02.271640    9860 retry.go:31] will retry after 475.563971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:02.763303    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:02.856794    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:02.856794    9860 retry.go:31] will retry after 606.937986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:03.481236    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:03.590712    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:03.590712    9860 retry.go:31] will retry after 925.734685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:04.535867    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:04.629668    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:04.629727    9860 retry.go:31] will retry after 2.094866674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:06.743040    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:06.846824    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:06.846824    9860 retry.go:31] will retry after 3.031592364s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:09.904659    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:10.009016    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:10.009016    9860 retry.go:31] will retry after 5.291529746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:15.325577    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:15.426600    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:15.426600    9860 retry.go:31] will retry after 7.114306937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:22.553759    9860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 10:58:22.657172    9860 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 10:58:22.657172    9860 retry.go:31] will retry after 12.156293554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-313300
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-313300:

-- stdout --
	[
	    {
	        "Id": "cc49c8d00f5d6d51c322e7f7c9ad2e5ffd0de7e211ed11cd9a826fdd903967f9",
	        "Created": "2024-02-26T10:49:05.159583501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T10:49:05.811598586Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/cc49c8d00f5d6d51c322e7f7c9ad2e5ffd0de7e211ed11cd9a826fdd903967f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc49c8d00f5d6d51c322e7f7c9ad2e5ffd0de7e211ed11cd9a826fdd903967f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc49c8d00f5d6d51c322e7f7c9ad2e5ffd0de7e211ed11cd9a826fdd903967f9/hosts",
	        "LogPath": "/var/lib/docker/containers/cc49c8d00f5d6d51c322e7f7c9ad2e5ffd0de7e211ed11cd9a826fdd903967f9/cc49c8d00f5d6d51c322e7f7c9ad2e5ffd0de7e211ed11cd9a826fdd903967f9-json.log",
	        "Name": "/ingress-addon-legacy-313300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-313300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-313300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b34cc4484bbd82db092b7bc654c6c5ad93c3781741e6859275f04bc033988d35-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b34cc4484bbd82db092b7bc654c6c5ad93c3781741e6859275f04bc033988d35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b34cc4484bbd82db092b7bc654c6c5ad93c3781741e6859275f04bc033988d35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b34cc4484bbd82db092b7bc654c6c5ad93c3781741e6859275f04bc033988d35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-313300",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-313300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-313300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-313300",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-313300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8af2681da9cb83ed4e95267971e88719337af4d1447d9f312298d3d1b801c3c",
	            "SandboxKey": "/var/run/docker/netns/f8af2681da9c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51959"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51960"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51961"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51962"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51963"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-313300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cc49c8d00f5d",
	                        "ingress-addon-legacy-313300"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "3fc1af3358b4300e778b7beb76fbf6fd1659508f19cebba3913e4df74ebbd724",
	                    "EndpointID": "d76d808ad9202ae16cb86b2716aeb9b04d846bb9b0e4125358699d20c4f22d5e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-313300",
	                        "cc49c8d00f5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-313300 -n ingress-addon-legacy-313300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-313300 -n ingress-addon-legacy-313300: exit status 6 (1.1955735s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0226 10:58:26.271445    6560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 10:58:27.283716    6560 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-313300" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-313300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (27.38s)

                                                
TestKubernetesUpgrade (793.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (10m54.6045452s)

-- stdout --
	* [kubernetes-upgrade-797800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-797800 in cluster kubernetes-upgrade-797800
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 26 11:37:42 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:42.019212    5932 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-797800_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:37:45 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:45.024222    5932 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-797800_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:37:49 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:49.018548    5932 pod_workers.go:191] Error syncing pod aed7597b27ada67581796293d695063e ("etcd-kubernetes-upgrade-797800_kube-system(aed7597b27ada67581796293d695063e)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	
	

-- /stdout --
** stderr ** 
	W0226 11:27:11.247835    6504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 11:27:11.371921    6504 out.go:291] Setting OutFile to fd 856 ...
	I0226 11:27:11.372916    6504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:27:11.372916    6504 out.go:304] Setting ErrFile to fd 780...
	I0226 11:27:11.372916    6504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:27:11.406659    6504 out.go:298] Setting JSON to false
	I0226 11:27:11.411286    6504 start.go:129] hostinfo: {"hostname":"minikube7","uptime":4108,"bootTime":1708942723,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:27:11.411517    6504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:27:11.419545    6504 out.go:177] * [kubernetes-upgrade-797800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:27:11.423528    6504 notify.go:220] Checking for updates...
	I0226 11:27:11.427951    6504 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:27:11.435085    6504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:27:11.444709    6504 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:27:11.449387    6504 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:27:11.459297    6504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:27:11.466310    6504 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:27:11.825945    6504 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:27:11.834957    6504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:27:12.236725    6504 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:74 SystemTime:2024-02-26 11:27:12.189885578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:27:12.242465    6504 out.go:177] * Using the docker driver based on user configuration
	I0226 11:27:12.245629    6504 start.go:299] selected driver: docker
	I0226 11:27:12.245697    6504 start.go:903] validating driver "docker" against <nil>
	I0226 11:27:12.245824    6504 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:27:12.323749    6504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:27:12.721915    6504 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:74 SystemTime:2024-02-26 11:27:12.679088629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:27:12.721915    6504 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:27:12.722917    6504 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 11:27:13.065031    6504 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 11:27:13.073891    6504 cni.go:84] Creating CNI manager for ""
	I0226 11:27:13.073970    6504 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:27:13.073970    6504 start_flags.go:323] config:
	{Name:kubernetes-upgrade-797800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-797800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:27:13.085275    6504 out.go:177] * Starting control plane node kubernetes-upgrade-797800 in cluster kubernetes-upgrade-797800
	I0226 11:27:13.089669    6504 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:27:13.100907    6504 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:27:13.104904    6504 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:27:13.104904    6504 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:27:13.104904    6504 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 11:27:13.104904    6504 cache.go:56] Caching tarball of preloaded images
	I0226 11:27:13.104904    6504 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:27:13.105906    6504 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0226 11:27:13.105906    6504 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\config.json ...
	I0226 11:27:13.105906    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\config.json: {Name:mkc4205d4323f8ac8541272fd34f231562e15eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:27:13.300415    6504 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:27:13.300415    6504 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:27:13.300415    6504 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:27:13.301419    6504 start.go:365] acquiring machines lock for kubernetes-upgrade-797800: {Name:mk767d5872d6ac440ae57346a635588a2d8e30e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:27:13.301419    6504 start.go:369] acquired machines lock for "kubernetes-upgrade-797800" in 0s
	I0226 11:27:13.301419    6504 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-797800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-797800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:27:13.301419    6504 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:27:13.305413    6504 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0226 11:27:13.306421    6504 start.go:159] libmachine.API.Create for "kubernetes-upgrade-797800" (driver="docker")
	I0226 11:27:13.306421    6504 client.go:168] LocalClient.Create starting
	I0226 11:27:13.306421    6504 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 11:27:13.306421    6504 main.go:141] libmachine: Decoding PEM data...
	I0226 11:27:13.306421    6504 main.go:141] libmachine: Parsing certificate...
	I0226 11:27:13.306421    6504 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 11:27:13.307429    6504 main.go:141] libmachine: Decoding PEM data...
	I0226 11:27:13.307429    6504 main.go:141] libmachine: Parsing certificate...
	I0226 11:27:13.317410    6504 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-797800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:27:13.534207    6504 cli_runner.go:211] docker network inspect kubernetes-upgrade-797800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:27:13.542215    6504 network_create.go:281] running [docker network inspect kubernetes-upgrade-797800] to gather additional debugging logs...
	I0226 11:27:13.542215    6504 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-797800
	W0226 11:27:13.694032    6504 cli_runner.go:211] docker network inspect kubernetes-upgrade-797800 returned with exit code 1
	I0226 11:27:13.694131    6504 network_create.go:284] error running [docker network inspect kubernetes-upgrade-797800]: docker network inspect kubernetes-upgrade-797800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-797800 not found
	I0226 11:27:13.694131    6504 network_create.go:286] output of [docker network inspect kubernetes-upgrade-797800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-797800 not found
	
	** /stderr **
	I0226 11:27:13.705883    6504 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:27:13.956341    6504 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:27:13.988337    6504 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:27:14.012355    6504 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002383440}
	I0226 11:27:14.012355    6504 network_create.go:124] attempt to create docker network kubernetes-upgrade-797800 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0226 11:27:14.020347    6504 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-797800 kubernetes-upgrade-797800
	I0226 11:27:14.280556    6504 network_create.go:108] docker network kubernetes-upgrade-797800 192.168.67.0/24 created
	I0226 11:27:14.280556    6504 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-797800" container
	I0226 11:27:14.300489    6504 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:27:14.541396    6504 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-797800 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:27:14.767679    6504 oci.go:103] Successfully created a docker volume kubernetes-upgrade-797800
	I0226 11:27:14.777677    6504 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-797800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --entrypoint /usr/bin/test -v kubernetes-upgrade-797800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:27:20.130751    6504 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-797800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --entrypoint /usr/bin/test -v kubernetes-upgrade-797800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (5.3530409s)
	I0226 11:27:20.131804    6504 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-797800
	I0226 11:27:20.131804    6504 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:27:20.131888    6504 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:27:20.141142    6504 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-797800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:28:16.575094    6504 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-797800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (56.4336074s)
	I0226 11:28:16.575094    6504 kic.go:203] duration metric: took 56.442862 seconds to extract preloaded images to volume
	I0226 11:28:16.594134    6504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:28:17.196149    6504 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:79 SystemTime:2024-02-26 11:28:17.143837952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:28:17.213166    6504 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 11:28:17.876837    6504 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-797800 --name kubernetes-upgrade-797800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --network kubernetes-upgrade-797800 --ip 192.168.67.2 --volume kubernetes-upgrade-797800:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 11:28:20.288148    6504 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-797800 --name kubernetes-upgrade-797800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-797800 --network kubernetes-upgrade-797800 --ip 192.168.67.2 --volume kubernetes-upgrade-797800:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf: (2.4112967s)
	I0226 11:28:20.305137    6504 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Running}}
	I0226 11:28:20.607182    6504 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Status}}
	I0226 11:28:20.898061    6504 cli_runner.go:164] Run: docker exec kubernetes-upgrade-797800 stat /var/lib/dpkg/alternatives/iptables
	I0226 11:28:21.359318    6504 oci.go:144] the created container "kubernetes-upgrade-797800" has a running status.
	I0226 11:28:21.359318    6504 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa...
	I0226 11:28:21.671329    6504 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 11:28:21.979310    6504 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Status}}
	I0226 11:28:22.220315    6504 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 11:28:22.220315    6504 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-797800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 11:28:22.621337    6504 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa...
	I0226 11:28:27.237103    6504 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Status}}
	I0226 11:28:27.507820    6504 machine.go:88] provisioning docker machine ...
	I0226 11:28:27.507820    6504 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-797800"
	I0226 11:28:27.525880    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:27.803907    6504 main.go:141] libmachine: Using SSH client type: native
	I0226 11:28:27.820637    6504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53185 <nil> <nil>}
	I0226 11:28:27.820637    6504 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-797800 && echo "kubernetes-upgrade-797800" | sudo tee /etc/hostname
	I0226 11:28:28.086633    6504 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-797800
	
	I0226 11:28:28.103243    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:28.357241    6504 main.go:141] libmachine: Using SSH client type: native
	I0226 11:28:28.358204    6504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53185 <nil> <nil>}
	I0226 11:28:28.358204    6504 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-797800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-797800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-797800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:28:28.575228    6504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:28:28.575228    6504 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 11:28:28.575228    6504 ubuntu.go:177] setting up certificates
	I0226 11:28:28.575228    6504 provision.go:83] configureAuth start
	I0226 11:28:28.591223    6504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-797800
	I0226 11:28:28.842278    6504 provision.go:138] copyHostCerts
	I0226 11:28:28.843213    6504 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 11:28:28.843213    6504 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 11:28:28.843213    6504 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 11:28:28.845236    6504 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 11:28:28.845236    6504 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 11:28:28.846239    6504 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 11:28:28.847215    6504 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 11:28:28.847215    6504 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 11:28:28.847215    6504 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 11:28:28.849232    6504 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-797800 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-797800]
	I0226 11:28:28.953223    6504 provision.go:172] copyRemoteCerts
	I0226 11:28:28.972231    6504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:28:28.988209    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:29.239236    6504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53185 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:28:29.404208    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0226 11:28:29.473881    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 11:28:29.532907    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 11:28:29.602898    6504 provision.go:86] duration metric: configureAuth took 1.027664s
	I0226 11:28:29.603880    6504 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:28:29.603880    6504 config.go:182] Loaded profile config "kubernetes-upgrade-797800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:28:29.618895    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:29.864889    6504 main.go:141] libmachine: Using SSH client type: native
	I0226 11:28:29.865891    6504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53185 <nil> <nil>}
	I0226 11:28:29.865891    6504 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 11:28:30.085894    6504 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 11:28:30.085894    6504 ubuntu.go:71] root file system type: overlay
	I0226 11:28:30.085894    6504 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 11:28:30.107895    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:30.358909    6504 main.go:141] libmachine: Using SSH client type: native
	I0226 11:28:30.358909    6504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53185 <nil> <nil>}
	I0226 11:28:30.358909    6504 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 11:28:30.594902    6504 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 11:28:30.610885    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:30.852898    6504 main.go:141] libmachine: Using SSH client type: native
	I0226 11:28:30.853889    6504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53185 <nil> <nil>}
	I0226 11:28:30.853889    6504 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 11:28:36.936326    6504 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 11:28:30.578673312 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 11:28:36.936326    6504 machine.go:91] provisioned docker machine in 9.4284482s
	I0226 11:28:36.936326    6504 client.go:171] LocalClient.Create took 1m23.6293952s
	I0226 11:28:36.936326    6504 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-797800" took 1m23.6293952s
	I0226 11:28:36.936326    6504 start.go:300] post-start starting for "kubernetes-upgrade-797800" (driver="docker")
	I0226 11:28:36.936326    6504 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:28:36.956311    6504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:28:36.971320    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:37.228314    6504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53185 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:28:37.395954    6504 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:28:37.408976    6504 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:28:37.408976    6504 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:28:37.408976    6504 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:28:37.408976    6504 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:28:37.408976    6504 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 11:28:37.409957    6504 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 11:28:37.411001    6504 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 11:28:37.431999    6504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 11:28:37.456959    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 11:28:37.516946    6504 start.go:303] post-start completed in 580.6168ms
	I0226 11:28:37.537965    6504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-797800
	I0226 11:28:37.765953    6504 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\config.json ...
	I0226 11:28:37.787958    6504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:28:37.804001    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:38.043949    6504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53185 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:28:38.213960    6504 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:28:38.228964    6504 start.go:128] duration metric: createHost completed in 1m24.9270279s
	I0226 11:28:38.228964    6504 start.go:83] releasing machines lock for "kubernetes-upgrade-797800", held for 1m24.9270279s
	I0226 11:28:38.247962    6504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-797800
	I0226 11:28:38.473955    6504 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:28:38.489966    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:38.492947    6504 ssh_runner.go:195] Run: cat /version.json
	I0226 11:28:38.509954    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:38.760954    6504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53185 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:28:38.773969    6504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53185 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:28:38.928968    6504 ssh_runner.go:195] Run: systemctl --version
	I0226 11:28:39.201955    6504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 11:28:39.239004    6504 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0226 11:28:39.266986    6504 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
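The `No such file or directory` above is a path-separator issue: the path was joined with Windows-style backslashes, so the Linux `find` receives `\etc\cni\net.d` as one literal filename rather than a directory tree. A minimal sketch of the normalization that would be needed (variable names here are illustrative, not from minikube):

```shell
# A Windows-joined path reaches the remote shell with backslashes intact:
win_path='\etc\cni\net.d'
# find(1) treats this as a single literal name and fails with
# "No such file or directory", as in the log above.
# Translating separators recovers the intended POSIX path:
posix_path=$(printf '%s' "$win_path" | tr '\\' '/')
echo "$posix_path"   # -> /etc/cni/net.d
```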
	I0226 11:28:39.290954    6504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 11:28:39.360105    6504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 11:28:39.404139    6504 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 11:28:39.404139    6504 start.go:475] detecting cgroup driver to use...
	I0226 11:28:39.404139    6504 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:28:39.405115    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:28:39.473116    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0226 11:28:39.528106    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 11:28:39.567954    6504 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 11:28:39.596917    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 11:28:39.650973    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:28:39.698960    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 11:28:39.747946    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:28:39.794923    6504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:28:39.838936    6504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 11:28:39.887945    6504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:28:39.936919    6504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:28:39.986942    6504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:28:40.187937    6504 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 11:28:40.423931    6504 start.go:475] detecting cgroup driver to use...
	I0226 11:28:40.423931    6504 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:28:40.443928    6504 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 11:28:40.480954    6504 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 11:28:40.504943    6504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 11:28:40.542938    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:28:40.621950    6504 ssh_runner.go:195] Run: which cri-dockerd
	I0226 11:28:40.665943    6504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 11:28:40.720962    6504 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 11:28:40.792947    6504 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 11:28:41.015927    6504 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 11:28:41.197418    6504 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 11:28:41.197418    6504 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 11:28:41.260180    6504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:28:41.473201    6504 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:28:45.608256    6504 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.1350296s)
	I0226 11:28:45.626190    6504 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:28:45.724732    6504 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:28:45.810715    6504 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0226 11:28:45.826726    6504 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-797800 dig +short host.docker.internal
	I0226 11:28:46.220377    6504 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 11:28:46.242433    6504 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 11:28:46.253369    6504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:28:46.303392    6504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:28:46.556374    6504 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:28:46.572391    6504 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:28:46.645375    6504 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 11:28:46.645375    6504 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:28:46.665372    6504 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:28:46.718376    6504 ssh_runner.go:195] Run: which lz4
	I0226 11:28:46.752401    6504 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 11:28:46.764417    6504 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 11:28:46.764417    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0226 11:29:01.194617    6504 docker.go:649] Took 14.464150 seconds to copy over tarball
	I0226 11:29:01.214609    6504 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 11:29:06.612159    6504 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.3975176s)
	I0226 11:29:06.612771    6504 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 11:29:06.710708    6504 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:29:06.732986    6504 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 11:29:06.780982    6504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:29:06.964437    6504 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:29:47.213780    6504 ssh_runner.go:235] Completed: sudo systemctl restart docker: (40.2490956s)
	I0226 11:29:47.226932    6504 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:29:47.270781    6504 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 11:29:47.270863    6504 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:29:47.270863    6504 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 11:29:47.293699    6504 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 11:29:47.293862    6504 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:29:47.299495    6504 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:29:47.299594    6504 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:29:47.300066    6504 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 11:29:47.300193    6504 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:29:47.300495    6504 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:29:47.305270    6504 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:29:47.311786    6504 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 11:29:47.311717    6504 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:29:47.318023    6504 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 11:29:47.318823    6504 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:29:47.323233    6504 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:29:47.325431    6504 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:29:47.327625    6504 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:29:47.328785    6504 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	W0226 11:29:47.403044    6504 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:29:47.498237    6504 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:29:47.590391    6504 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:29:47.683728    6504 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:29:47.776504    6504 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:29:47.830187    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:29:47.875024    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0226 11:29:47.884071    6504 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:29:47.929848    6504 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 11:29:47.929848    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:29:47.929848    6504 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 11:29:47.937865    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 11:29:47.938845    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 11:29:47.940852    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	W0226 11:29:47.993139    6504 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:29:48.004433    6504 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 11:29:48.004433    6504 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 11:29:48.004433    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:29:48.004433    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:29:48.004433    6504 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:29:48.004433    6504 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 11:29:48.012461    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:29:48.014378    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 11:29:48.017303    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0226 11:29:48.039893    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:29:48.106012    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:29:48.106012    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	W0226 11:29:48.115869    6504 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:29:48.117128    6504 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 11:29:48.117659    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:29:48.117659    6504 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:29:48.128177    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:29:48.150988    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:29:48.168986    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:29:48.193501    6504 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 11:29:48.196105    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:29:48.196305    6504 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:29:48.210239    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:29:48.273273    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:29:48.278265    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:29:48.314452    6504 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 11:29:48.314452    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:29:48.314452    6504 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:29:48.322293    6504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:29:48.326653    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:29:48.362522    6504 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 11:29:48.362610    6504 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:29:48.362760    6504 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:29:48.369663    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:29:48.376633    6504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:29:48.420195    6504 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:29:48.420501    6504 cache_images.go:92] LoadImages completed in 1.1496309s
	W0226 11:29:48.420682    6504 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1: The system cannot find the file specified.
	I0226 11:29:48.432634    6504 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:29:48.551123    6504 cni.go:84] Creating CNI manager for ""
	I0226 11:29:48.551229    6504 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:29:48.551396    6504 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:29:48.551396    6504 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-797800 NodeName:kubernetes-upgrade-797800 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 11:29:48.551679    6504 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-797800"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-797800
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
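	The kubeadm config dumped above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube later copies to /var/tmp/minikube/kubeadm.yaml. As an illustrative aside (not part of the test run), the document structure of such a file can be checked with plain text tools; the heredoc below is a trimmed stand-in for the full generated file:

```shell
#!/bin/sh
# Illustrative sketch only: list the "kind" of each document in a
# multi-document kubeadm config like the one rendered in the log above.
# The heredoc is a trimmed sample, not the actual generated file.
cat <<'EOF' > /tmp/kubeadm-sample.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each "---" separates one config document; grep the kind of each.
grep '^kind:' /tmp/kubeadm-sample.yaml
```

The four kinds printed match the four documents in the logged config.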
	I0226 11:29:48.551824    6504 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-797800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-797800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:29:48.566449    6504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 11:29:48.588406    6504 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:29:48.603267    6504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:29:48.627011    6504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0226 11:29:48.665718    6504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:29:48.703255    6504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0226 11:29:48.757180    6504 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:29:48.768166    6504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:29:48.788552    6504 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800 for IP: 192.168.67.2
	I0226 11:29:48.788666    6504 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:48.789587    6504 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:29:48.790244    6504 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 11:29:48.791467    6504 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\client.key
	I0226 11:29:48.791735    6504 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\client.crt with IP's: []
	I0226 11:29:48.951857    6504 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\client.crt ...
	I0226 11:29:48.951857    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\client.crt: {Name:mk5a6716c741bc8209308fe3534f7e7ec76300ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:48.952873    6504 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\client.key ...
	I0226 11:29:48.952873    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\client.key: {Name:mk8a44aa0eb4e8d5a894d66478625eb8f7f8d748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:48.954870    6504 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.key.c7fa3a9e
	I0226 11:29:48.954870    6504 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:29:49.582593    6504 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.crt.c7fa3a9e ...
	I0226 11:29:49.582593    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.crt.c7fa3a9e: {Name:mkb0a709b304da485d9d0a95f1a97b3f541d9925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:49.583579    6504 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.key.c7fa3a9e ...
	I0226 11:29:49.583579    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.key.c7fa3a9e: {Name:mk93d38284f284fc176bc94383c4a6f1cd84ce62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:49.584571    6504 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.crt
	I0226 11:29:49.602584    6504 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.key
	I0226 11:29:49.604591    6504 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.key
	I0226 11:29:49.604591    6504 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.crt with IP's: []
	I0226 11:29:49.878043    6504 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.crt ...
	I0226 11:29:49.878043    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.crt: {Name:mkcda0e36ce3f2ae4ee0f0d86968861245e7cf1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:49.880058    6504 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.key ...
	I0226 11:29:49.880058    6504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.key: {Name:mkcbd93269157da65cb00364e3cf21d891ff1fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:29:49.902631    6504 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:29:49.903492    6504 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:29:49.903697    6504 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:29:49.904047    6504 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:29:49.904520    6504 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:29:49.904817    6504 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:29:49.905750    6504 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:29:49.907900    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:29:49.967437    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 11:29:50.018360    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:29:50.086913    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-797800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 11:29:50.151506    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:29:50.208703    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:29:50.261195    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:29:50.312212    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:29:50.362559    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:29:50.417402    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:29:50.471555    6504 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:29:50.522549    6504 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:29:50.575977    6504 ssh_runner.go:195] Run: openssl version
	I0226 11:29:50.600979    6504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:29:50.641722    6504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:29:50.658629    6504 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:29:50.672722    6504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:29:50.714728    6504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:29:50.765181    6504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:29:50.799214    6504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:29:50.814187    6504 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:29:50.833195    6504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:29:50.864195    6504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:29:50.901190    6504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:29:50.944197    6504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:29:50.957206    6504 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:29:50.972190    6504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:29:51.003195    6504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:29:51.051220    6504 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:29:51.063199    6504 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 11:29:51.064212    6504 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-797800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-797800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:29:51.074203    6504 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:29:51.137834    6504 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:29:51.177731    6504 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:29:51.194739    6504 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:29:51.208737    6504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:29:51.230685    6504 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:29:51.230685    6504 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:29:51.601367    6504 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 11:29:51.601589    6504 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 11:29:51.703656    6504 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 11:29:51.878046    6504 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:33:57.196182    6504 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 11:33:57.196182    6504 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 11:33:57.205986    6504 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 11:33:57.205986    6504 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:33:57.205986    6504 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:33:57.206603    6504 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:33:57.207603    6504 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:33:57.207757    6504 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:33:57.208588    6504 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:33:57.208588    6504 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 11:33:57.208588    6504 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:33:57.213074    6504 out.go:204]   - Generating certificates and keys ...
	I0226 11:33:57.214066    6504 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:33:57.214066    6504 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:33:57.214066    6504 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:33:57.214066    6504 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:33:57.215086    6504 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:33:57.215086    6504 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:33:57.215086    6504 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:33:57.215086    6504 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-797800 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 11:33:57.215086    6504 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:33:57.216065    6504 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-797800 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 11:33:57.216065    6504 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:33:57.216065    6504 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:33:57.216065    6504 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:33:57.217068    6504 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:33:57.217068    6504 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:33:57.217068    6504 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:33:57.217068    6504 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:33:57.217068    6504 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:33:57.218073    6504 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:33:57.222068    6504 out.go:204]   - Booting up control plane ...
	I0226 11:33:57.223060    6504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:33:57.223060    6504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:33:57.223060    6504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:33:57.223060    6504 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:33:57.224060    6504 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:33:57.224060    6504 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 11:33:57.224060    6504 kubeadm.go:322] 
	I0226 11:33:57.224060    6504 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 11:33:57.224060    6504 kubeadm.go:322] 	timed out waiting for the condition
	I0226 11:33:57.224060    6504 kubeadm.go:322] 
	I0226 11:33:57.224060    6504 kubeadm.go:322] This error is likely caused by:
	I0226 11:33:57.225073    6504 kubeadm.go:322] 	- The kubelet is not running
	I0226 11:33:57.225073    6504 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 11:33:57.225073    6504 kubeadm.go:322] 
	I0226 11:33:57.225073    6504 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 11:33:57.225073    6504 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 11:33:57.226061    6504 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 11:33:57.226061    6504 kubeadm.go:322] 
	I0226 11:33:57.226061    6504 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 11:33:57.227053    6504 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 11:33:57.227053    6504 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 11:33:57.227053    6504 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 11:33:57.227053    6504 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 11:33:57.227053    6504 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0226 11:33:57.228055    6504 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-797800 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-797800 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-797800 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-797800 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0226 11:33:57.228055    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 11:34:00.107607    6504 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.879533s)
	I0226 11:34:00.128615    6504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:34:00.156598    6504 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:34:00.172609    6504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:34:00.194587    6504 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:34:00.194587    6504 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:34:00.618919    6504 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 11:34:00.618919    6504 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 11:34:00.765890    6504 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 11:34:01.002498    6504 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:38:04.689638    6504 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 11:38:04.689638    6504 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 11:38:04.692627    6504 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 11:38:04.693634    6504 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:38:04.693634    6504 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:38:04.693634    6504 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:38:04.693634    6504 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:38:04.694640    6504 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:38:04.694640    6504 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:38:04.694640    6504 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 11:38:04.694640    6504 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:38:04.778661    6504 out.go:204]   - Generating certificates and keys ...
	I0226 11:38:04.778661    6504 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:38:04.778661    6504 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:38:04.778661    6504 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 11:38:04.778661    6504 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 11:38:04.779652    6504 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 11:38:04.779652    6504 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 11:38:04.779652    6504 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 11:38:04.779652    6504 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 11:38:04.779652    6504 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 11:38:04.780642    6504 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 11:38:04.780642    6504 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 11:38:04.780642    6504 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:38:04.780642    6504 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:38:04.780642    6504 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:38:04.780642    6504 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:38:04.780642    6504 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:38:04.781660    6504 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:38:04.837647    6504 out.go:204]   - Booting up control plane ...
	I0226 11:38:04.837647    6504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:38:04.838666    6504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:38:04.838666    6504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:38:04.838666    6504 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:38:04.838666    6504 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:38:04.839671    6504 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 11:38:04.839671    6504 kubeadm.go:322] 
	I0226 11:38:04.839671    6504 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 11:38:04.839671    6504 kubeadm.go:322] 	timed out waiting for the condition
	I0226 11:38:04.839671    6504 kubeadm.go:322] 
	I0226 11:38:04.839671    6504 kubeadm.go:322] This error is likely caused by:
	I0226 11:38:04.839671    6504 kubeadm.go:322] 	- The kubelet is not running
	I0226 11:38:04.839671    6504 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 11:38:04.839671    6504 kubeadm.go:322] 
	I0226 11:38:04.840636    6504 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 11:38:04.840636    6504 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 11:38:04.840636    6504 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 11:38:04.840636    6504 kubeadm.go:322] 
	I0226 11:38:04.840636    6504 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 11:38:04.841635    6504 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 11:38:04.841635    6504 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 11:38:04.841635    6504 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 11:38:04.841635    6504 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 11:38:04.841635    6504 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 11:38:04.841635    6504 kubeadm.go:406] StartCluster complete in 8m13.7742853s
	I0226 11:38:04.854650    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:38:04.903635    6504 logs.go:276] 0 containers: []
	W0226 11:38:04.903635    6504 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:38:04.917637    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:38:04.959666    6504 logs.go:276] 0 containers: []
	W0226 11:38:04.959666    6504 logs.go:278] No container was found matching "etcd"
	I0226 11:38:04.970650    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:38:05.020651    6504 logs.go:276] 0 containers: []
	W0226 11:38:05.020651    6504 logs.go:278] No container was found matching "coredns"
	I0226 11:38:05.035653    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:38:05.081633    6504 logs.go:276] 0 containers: []
	W0226 11:38:05.081633    6504 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:38:05.093641    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:38:05.142947    6504 logs.go:276] 0 containers: []
	W0226 11:38:05.142947    6504 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:38:05.155943    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:38:05.199618    6504 logs.go:276] 0 containers: []
	W0226 11:38:05.200619    6504 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:38:05.213623    6504 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:38:05.260616    6504 logs.go:276] 0 containers: []
	W0226 11:38:05.260616    6504 logs.go:278] No container was found matching "kindnet"
	I0226 11:38:05.260616    6504 logs.go:123] Gathering logs for kubelet ...
	I0226 11:38:05.260616    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:38:05.302621    6504 logs.go:138] Found kubelet problem: Feb 26 11:37:42 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:42.019212    5932 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-797800_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:38:05.310612    6504 logs.go:138] Found kubelet problem: Feb 26 11:37:45 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:45.024222    5932 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-797800_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:38:05.322616    6504 logs.go:138] Found kubelet problem: Feb 26 11:37:49 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:49.018548    5932 pod_workers.go:191] Error syncing pod aed7597b27ada67581796293d695063e ("etcd-kubernetes-upgrade-797800_kube-system(aed7597b27ada67581796293d695063e)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:38:05.339632    6504 logs.go:138] Found kubelet problem: Feb 26 11:37:54 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:54.083788    5932 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-797800_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:38:05.344622    6504 logs.go:138] Found kubelet problem: Feb 26 11:37:55 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:55.101569    5932 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-kubernetes-upgrade-797800_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:38:05.357613    6504 logs.go:138] Found kubelet problem: Feb 26 11:38:00 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:38:00.023957    5932 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-797800_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:38:05.366612    6504 logs.go:138] Found kubelet problem: Feb 26 11:38:04 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:38:04.025586    5932 pod_workers.go:191] Error syncing pod aed7597b27ada67581796293d695063e ("etcd-kubernetes-upgrade-797800_kube-system(aed7597b27ada67581796293d695063e)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:38:05.369619    6504 logs.go:138] Found kubelet problem: Feb 26 11:38:05 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:38:05.030321    5932 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-797800_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:38:05.370636    6504 logs.go:123] Gathering logs for dmesg ...
	I0226 11:38:05.370636    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:38:05.403620    6504 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:38:05.403620    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:38:05.536603    6504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:38:05.536603    6504 logs.go:123] Gathering logs for Docker ...
	I0226 11:38:05.536603    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:38:05.571602    6504 logs.go:123] Gathering logs for container status ...
	I0226 11:38:05.571602    6504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 11:38:05.660605    6504 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 11:38:05.660605    6504 out.go:239] * 
	* 
	W0226 11:38:05.660605    6504 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 11:38:05.660605    6504 out.go:239] * 
	W0226 11:38:05.662601    6504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 11:38:05.665620    6504 out.go:177] X Problems detected in kubelet:
	I0226 11:38:05.670606    6504 out.go:177]   Feb 26 11:37:42 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:42.019212    5932 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-797800_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:38:05.676620    6504 out.go:177]   Feb 26 11:37:45 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:45.024222    5932 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-797800_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:38:05.681607    6504 out.go:177]   Feb 26 11:37:49 kubernetes-upgrade-797800 kubelet[5932]: E0226 11:37:49.018548    5932 pod_workers.go:191] Error syncing pod aed7597b27ada67581796293d695063e ("etcd-kubernetes-upgrade-797800_kube-system(aed7597b27ada67581796293d695063e)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:38:05.687625    6504 out.go:177] 
	W0226 11:38:05.690612    6504 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 11:38:05.690612    6504 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 11:38:05.691660    6504 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 11:38:05.693628    6504 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-797800
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-797800: (4.658311s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-797800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-797800 status --format={{.Host}}: exit status 7 (502.5485ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:38:11.026697   10712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker
E0226 11:38:55.860537   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker: (1m14.2313365s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-797800 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (280.1656ms)

-- stdout --
	* [kubernetes-upgrade-797800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0226 11:39:25.908033   10472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-797800
	    minikube start -p kubernetes-upgrade-797800 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7978002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-797800 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-797800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker: (45.2577969s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-26 11:40:11.3366852 +0000 UTC m=+4499.669919301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-797800
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-797800:

-- stdout --
	[
	    {
	        "Id": "2a5fe004dea439573dbc9bbfd48d556f4e0555d572d10f35add1d7be26e9735b",
	        "Created": "2024-02-26T11:28:18.119712342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:38:14.19266956Z",
	            "FinishedAt": "2024-02-26T11:38:09.196934389Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/2a5fe004dea439573dbc9bbfd48d556f4e0555d572d10f35add1d7be26e9735b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a5fe004dea439573dbc9bbfd48d556f4e0555d572d10f35add1d7be26e9735b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a5fe004dea439573dbc9bbfd48d556f4e0555d572d10f35add1d7be26e9735b/hosts",
	        "LogPath": "/var/lib/docker/containers/2a5fe004dea439573dbc9bbfd48d556f4e0555d572d10f35add1d7be26e9735b/2a5fe004dea439573dbc9bbfd48d556f4e0555d572d10f35add1d7be26e9735b-json.log",
	        "Name": "/kubernetes-upgrade-797800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-797800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-797800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ce1f35473bdf0fead9525e7e67390606941f8c766458a92b3126b98f0dbdaab2-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce1f35473bdf0fead9525e7e67390606941f8c766458a92b3126b98f0dbdaab2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce1f35473bdf0fead9525e7e67390606941f8c766458a92b3126b98f0dbdaab2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce1f35473bdf0fead9525e7e67390606941f8c766458a92b3126b98f0dbdaab2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-797800",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-797800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-797800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-797800",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-797800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dcbfee5bd3d438d1687a7229185bfb089319cb7cf847b18fa1d6fe9ef866e488",
	            "SandboxKey": "/var/run/docker/netns/dcbfee5bd3d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53950"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53951"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53952"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53953"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53954"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-797800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2a5fe004dea4",
	                        "kubernetes-upgrade-797800"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "1b482adff3cdf156a541334486f1dc6465394184b5139de4876cb165baa9edc1",
	                    "EndpointID": "ac133cfc6d027696335010cb0c23fb1a80a32d25da118e29a9d78942e6df1c74",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-797800",
	                        "2a5fe004dea4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
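Aside: the inspect dump above ends with a `NetworkSettings.Ports` mapping, and later in this log minikube reads the published API-server port from it with `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'"`. As a minimal sketch (not part of the test suite), the same lookup can be done on the inspect JSON directly; the sample data below is a trimmed copy of the mapping from the dump above, and `host_port` is a hypothetical helper name:

```python
import json

# Trimmed copy of the NetworkSettings.Ports mapping from the inspect output
# above (`docker container inspect` returns a JSON array of containers).
inspect_output = json.loads("""
[
    {
        "NetworkSettings": {
            "Ports": {
                "5000/tcp": [
                    {"HostIp": "127.0.0.1", "HostPort": "53953"}
                ],
                "8443/tcp": [
                    {"HostIp": "127.0.0.1", "HostPort": "53954"}
                ]
            }
        }
    }
]
""")

def host_port(inspect_json, container_port="8443/tcp"):
    """First published host port for a container port, or None if unmapped.

    Mirrors the Go template
    {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}:
    index into the Ports map, take binding 0, read HostPort.
    """
    bindings = inspect_json[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

print(host_port(inspect_output))  # prints 53954, the port minikube dials as https://127.0.0.1:53954
```

This matches the `rest.Config{Host:"https://127.0.0.1:53954"}` seen later in the log: minikube resolves the host-side port of the container's 8443/tcp binding and talks to the API server through it.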
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-797800 -n kubernetes-upgrade-797800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-797800 -n kubernetes-upgrade-797800: (1.612784s)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-797800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-797800 logs -n 25: (2.6807851s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| unpause | -p pause-268400                                        | pause-268400              | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:35 UTC | 26 Feb 24 11:35 UTC |
	|         | --alsologtostderr -v=5                                 |                           |                   |         |                     |                     |
	| pause   | -p pause-268400                                        | pause-268400              | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:35 UTC | 26 Feb 24 11:35 UTC |
	|         | --alsologtostderr -v=5                                 |                           |                   |         |                     |                     |
	| delete  | -p pause-268400                                        | pause-268400              | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:35 UTC | 26 Feb 24 11:36 UTC |
	|         | --alsologtostderr -v=5                                 |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-784500                               | force-systemd-env-784500  | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:36 UTC | 26 Feb 24 11:36 UTC |
	|         | ssh docker info --format                               |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                      |                           |                   |         |                     |                     |
	| delete  | -p pause-268400                                        | pause-268400              | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:36 UTC | 26 Feb 24 11:36 UTC |
	| delete  | -p force-systemd-env-784500                            | force-systemd-env-784500  | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:36 UTC | 26 Feb 24 11:36 UTC |
	| start   | -p docker-flags-256600                                 | docker-flags-256600       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:36 UTC | 26 Feb 24 11:37 UTC |
	|         | --cache-images=false                                   |                           |                   |         |                     |                     |
	|         | --memory=2048                                          |                           |                   |         |                     |                     |
	|         | --install-addons=false                                 |                           |                   |         |                     |                     |
	|         | --wait=false                                           |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                                   |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                                   |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                                     |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                                 |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| start   | -p cert-options-380200                                 | cert-options-380200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:36 UTC | 26 Feb 24 11:37 UTC |
	|         | --memory=2048                                          |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --apiserver-name=localhost                             |                           |                   |         |                     |                     |
	| ssh     | docker-flags-256600 ssh                                | docker-flags-256600       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:37 UTC |
	|         | sudo systemctl show docker                             |                           |                   |         |                     |                     |
	|         | --property=Environment                                 |                           |                   |         |                     |                     |
	|         | --no-pager                                             |                           |                   |         |                     |                     |
	| ssh     | docker-flags-256600 ssh                                | docker-flags-256600       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:37 UTC |
	|         | sudo systemctl show docker                             |                           |                   |         |                     |                     |
	|         | --property=ExecStart                                   |                           |                   |         |                     |                     |
	|         | --no-pager                                             |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-256600                                 | docker-flags-256600       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:37 UTC |
	| ssh     | cert-options-380200 ssh                                | cert-options-380200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:37 UTC |
	|         | openssl x509 -text -noout -in                          |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-380200 -- sudo                         | cert-options-380200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |                   |         |                     |                     |
	| delete  | -p cert-options-380200                                 | cert-options-380200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:37 UTC |
	| start   | -p old-k8s-version-321200                              | old-k8s-version-321200    | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC |                     |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                           |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |                   |         |                     |                     |
	|         | --keep-context=false                                   |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |                   |         |                     |                     |
	| start   | -p no-preload-279800                                   | no-preload-279800         | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:37 UTC | 26 Feb 24 11:39 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-797800                           | kubernetes-upgrade-797800 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:38 UTC | 26 Feb 24 11:38 UTC |
	| start   | -p kubernetes-upgrade-797800                           | kubernetes-upgrade-797800 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:38 UTC | 26 Feb 24 11:39 UTC |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-797800                           | kubernetes-upgrade-797800 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-797800                           | kubernetes-upgrade-797800 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:39 UTC | 26 Feb 24 11:40 UTC |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| start   | -p cert-expiration-720300                              | cert-expiration-720300    | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:39 UTC |                     |
	|         | --memory=2048                                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-279800             | no-preload-279800         | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:39 UTC | 26 Feb 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |                   |         |                     |                     |
	| stop    | -p no-preload-279800                                   | no-preload-279800         | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:39 UTC | 26 Feb 24 11:40 UTC |
	|         | --alsologtostderr -v=3                                 |                           |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-279800                  | no-preload-279800         | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:40 UTC | 26 Feb 24 11:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |                   |         |                     |                     |
	| start   | -p no-preload-279800                                   | no-preload-279800         | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:40 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:40:09
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:40:09.471636   11208 out.go:291] Setting OutFile to fd 1624 ...
	I0226 11:40:09.471636   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:40:09.471636   11208 out.go:304] Setting ErrFile to fd 1920...
	I0226 11:40:09.471636   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:40:09.491640   11208 out.go:298] Setting JSON to false
	I0226 11:40:09.496663   11208 start.go:129] hostinfo: {"hostname":"minikube7","uptime":4886,"bootTime":1708942723,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:40:09.496663   11208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:40:09.500640   11208 out.go:177] * [no-preload-279800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:40:09.275660    6644 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Status}}
	I0226 11:40:09.277682    6644 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Status}}
	I0226 11:40:09.298667    6644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:40:09.405641    6644 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 11:40:09.418645    6644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:40:09.487660    6644 kapi.go:59] client config for kubernetes-upgrade-797800: &rest.Config{Host:"https://127.0.0.1:53954", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-797800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-797800\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x251e0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 11:40:09.488662    6644 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-797800"
	W0226 11:40:09.488662    6644 addons.go:243] addon default-storageclass should already be in state true
	I0226 11:40:09.488662    6644 host.go:66] Checking if "kubernetes-upgrade-797800" exists ...
	I0226 11:40:09.504640    6644 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:40:09.504640   11208 notify.go:220] Checking for updates...
	I0226 11:40:09.507636   11208 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:40:09.510636   11208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:40:09.517649   11208 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:40:09.521658   11208 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:40:09.524640   11208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:40:09.527650   11208 config.go:182] Loaded profile config "no-preload-279800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 11:40:09.528644   11208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:40:09.818037   11208 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:40:09.829503   11208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:40:10.196285   11208 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:40:10.15422049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:40:10.202922   11208 out.go:177] * Using the docker driver based on existing profile
	I0226 11:40:10.205000   11208 start.go:299] selected driver: docker
	I0226 11:40:10.205000   11208 start.go:903] validating driver "docker" against &{Name:no-preload-279800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-279800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:40:10.205000   11208 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:40:10.281706   11208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:40:10.645083   11208 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:40:10.608664194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:40:10.645083   11208 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:40:10.645083   11208 cni.go:84] Creating CNI manager for ""
	I0226 11:40:10.645083   11208 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 11:40:10.645083   11208 start_flags.go:323] config:
	{Name:no-preload-279800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-279800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:40:10.652094   11208 out.go:177] * Starting control plane node no-preload-279800 in cluster no-preload-279800
	I0226 11:40:10.654088   11208 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:40:10.657099   11208 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:40:05.867583   11132 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0226 11:40:05.876587   11132 cli_runner.go:164] Run: docker exec -t cert-expiration-720300 dig +short host.docker.internal
	I0226 11:40:06.146546   11132 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 11:40:06.159024   11132 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 11:40:06.179493   11132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-720300
	I0226 11:40:06.347474   11132 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:40:06.357857   11132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:40:06.407705   11132 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 11:40:06.407705   11132 docker.go:615] Images already preloaded, skipping extraction
	I0226 11:40:06.422697   11132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:40:06.466352   11132 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 11:40:06.466352   11132 cache_images.go:84] Images are preloaded, skipping loading
	I0226 11:40:06.477105   11132 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:40:06.594658   11132 cni.go:84] Creating CNI manager for ""
	I0226 11:40:06.594658   11132 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 11:40:06.594658   11132 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:40:06.594658   11132 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-720300 NodeName:cert-expiration-720300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 11:40:06.594658   11132 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "cert-expiration-720300"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
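Editor's note: the `"0%!"(MISSING)` strings in the evictionHard section above are a Go fmt artifact, not the real configuration values; the rendered kubeadm YAML contains `"0%"`, and the garbling appears because the finished text was later passed through a printf-style logger, where `%"` reads as a verb with no matching argument. This is not minikube's actual logging code, just a minimal reproduction of the mechanism:

```go
package main

import "fmt"

// render passes the finished text as a printf FORMAT string (instead of
// as data via "%s"), so Go interprets `%"` as an unknown verb with no
// argument and substitutes `%!"(MISSING)`.
func render(value string) string {
	return fmt.Sprintf("nodefs.available: \"" + value + "\"")
}

func main() {
	fmt.Println(render("0%"))                    // garbled: nodefs.available: "0%!"(MISSING)
	fmt.Printf("%s\n", `nodefs.available: "0%"`) // safe: the value is passed as an argument
}
```

The fix on the logging side is always `fmt.Printf("%s", text)` rather than `fmt.Printf(text)` when `text` may contain `%`.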
	I0226 11:40:06.594658   11132 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=cert-expiration-720300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-720300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:40:06.605687   11132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 11:40:06.625094   11132 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:40:06.642088   11132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:40:06.745037   11132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0226 11:40:06.782663   11132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:40:06.813930   11132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0226 11:40:06.858162   11132 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:40:06.870835   11132 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300 for IP: 192.168.76.2
	I0226 11:40:06.870835   11132 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:40:06.871617   11132 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:40:06.871617   11132 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	W0226 11:40:06.873051   11132 out.go:239] ! Certificate client.crt has expired. Generating a new one...
	I0226 11:40:06.873051   11132 certs.go:576] cert expired C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.crt: expiration: 2024-02-26 11:39:11 +0000 UTC, now: 2024-02-26 11:40:06.8730516 +0000 UTC m=+21.350619701
	I0226 11:40:06.874177   11132 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.key
	I0226 11:40:06.874213   11132 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.crt with IP's: []
	I0226 11:40:07.084082   11132 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.crt ...
	I0226 11:40:07.084082   11132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.crt: {Name:mkfad0b94d7d6ec98398d2173f53caab5e3d9218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:40:07.086060   11132 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.key ...
	I0226 11:40:07.086060   11132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\client.key: {Name:mk917e2dea881df87e11d4cf5c37f28d1ccb7533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0226 11:40:07.087078   11132 out.go:239] ! Certificate apiserver.crt.31bdca25 has expired. Generating a new one...
	I0226 11:40:07.087078   11132 certs.go:576] cert expired C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt.31bdca25: expiration: 2024-02-26 11:39:12 +0000 UTC, now: 2024-02-26 11:40:07.0870783 +0000 UTC m=+21.564644901
	I0226 11:40:07.088737   11132 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.key.31bdca25
	I0226 11:40:07.088737   11132 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:40:07.343014   11132 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt.31bdca25 ...
	I0226 11:40:07.343014   11132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt.31bdca25: {Name:mka2ba687e9007e01c83dc4b1bd04e8490ebc928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:40:07.344018   11132 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.key.31bdca25 ...
	I0226 11:40:07.344018   11132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.key.31bdca25: {Name:mkc2f7d4a5b2cb8ff9fb6e966510cabf03cc480c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:40:07.345013   11132 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt.31bdca25 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt
	I0226 11:40:07.358071   11132 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.key.31bdca25 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.key
	W0226 11:40:07.359070   11132 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0226 11:40:07.359070   11132 certs.go:576] cert expired C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.crt: expiration: 2024-02-26 11:39:12 +0000 UTC, now: 2024-02-26 11:40:07.3590707 +0000 UTC m=+21.836635501
	I0226 11:40:07.360026   11132 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.key
	I0226 11:40:07.360026   11132 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.crt with IP's: []
	I0226 11:40:07.484925   11132 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.crt ...
	I0226 11:40:07.484925   11132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.crt: {Name:mk6f53f824fc3bf335632ef3d6828adcb8529089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:40:07.485929   11132 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.key ...
	I0226 11:40:07.485929   11132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.key: {Name:mkf692f4ca7c7b637b6619a900ec17e43eda2b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:40:07.499000   11132 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:40:07.499000   11132 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:40:07.499000   11132 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:40:07.499000   11132 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:40:07.499000   11132 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:40:07.499992   11132 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:40:07.499992   11132 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:40:07.500991   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:40:07.627173   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 11:40:07.668856   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:40:07.711254   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cert-expiration-720300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 11:40:07.755924   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:40:07.803472   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:40:07.841504   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:40:08.231435   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:40:08.287417   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:40:08.339427   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:40:08.385658   11132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:40:08.439798   11132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:40:08.497384   11132 ssh_runner.go:195] Run: openssl version
	I0226 11:40:08.534262   11132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:40:08.571172   11132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:40:08.582169   11132 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:40:08.596037   11132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:40:08.627455   11132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:40:08.653512   11132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:40:08.682457   11132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:40:08.692459   11132 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:40:08.703462   11132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:40:08.731466   11132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:40:08.767496   11132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:40:08.799472   11132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:40:08.808479   11132 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:40:08.821469   11132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:40:08.855467   11132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:40:08.888458   11132 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:40:08.911468   11132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 11:40:08.949195   11132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 11:40:08.976569   11132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 11:40:09.007586   11132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 11:40:09.040585   11132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 11:40:09.074589   11132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0226 11:40:09.094589   11132 kubeadm.go:404] StartCluster: {Name:cert-expiration-720300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-720300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:40:09.107582   11132 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:40:09.172284   11132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:40:09.192375   11132 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 11:40:09.192375   11132 kubeadm.go:636] restartCluster start
	I0226 11:40:09.209405   11132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 11:40:09.234671   11132 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:40:09.245680   11132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-720300
	I0226 11:40:09.454639   11132 kubeconfig.go:92] found "cert-expiration-720300" server: "https://127.0.0.1:53792"
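The `docker container inspect -f` calls above use a Go template, `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`, to pull the published host port for the apiserver out of the inspect output. A standalone sketch of how that template indexes the data; the struct shape is a trimmed stand-in for the real inspect JSON, and the port number is made up:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// inspect is a minimal stand-in for the .NetworkSettings.Ports field of
// `docker container inspect` output (hypothetical, heavily trimmed).
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostIP, HostPort string }
	}
}

// portTmpl is the same template minikube passes via `-f`: index the Ports
// map by "8443/tcp", take the first binding, and read its HostPort.
const portTmpl = `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`

func hostPort(c inspect) (string, error) {
	t, err := template.New("port").Parse(portTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var c inspect
	c.NetworkSettings.Ports = map[string][]struct{ HostIP, HostPort string }{
		"8443/tcp": {{HostIP: "0.0.0.0", HostPort: "53792"}},
	}
	p, err := hostPort(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // 53792
}
```

Note that `index` returns the zero value for a missing map key, so the inner `index ... 0` fails with an execution error when the container has no binding for 8443/tcp, which is why these inspect calls can error out on stopped containers.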
	I0226 11:40:09.471636   11132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 11:40:09.495639   11132 api_server.go:166] Checking apiserver status ...
	I0226 11:40:09.514652   11132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:40:09.537651   11132 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:40:09.999996   11132 api_server.go:166] Checking apiserver status ...
	I0226 11:40:10.026010   11132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:40:10.050981   11132 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:40:10.496426   11132 api_server.go:166] Checking apiserver status ...
	I0226 11:40:10.514511   11132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:40:10.538461   11132 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:40:09.507636    6644 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:40:09.507636    6644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 11:40:09.512641    6644 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-797800 --format={{.State.Status}}
	I0226 11:40:09.517649    6644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:40:09.610798    6644 api_server.go:52] waiting for apiserver process to appear ...
	I0226 11:40:09.622643    6644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:40:09.647637    6644 api_server.go:72] duration metric: took 376.9703ms to wait for apiserver process to appear ...
	I0226 11:40:09.647637    6644 api_server.go:88] waiting for apiserver healthz status ...
	I0226 11:40:09.647637    6644 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53954/healthz ...
	I0226 11:40:09.661649    6644 api_server.go:279] https://127.0.0.1:53954/healthz returned 200:
	ok
	I0226 11:40:09.666654    6644 api_server.go:141] control plane version: v1.29.0-rc.2
	I0226 11:40:09.666654    6644 api_server.go:131] duration metric: took 19.0169ms to wait for apiserver health ...
	I0226 11:40:09.666654    6644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 11:40:09.676654    6644 system_pods.go:59] 5 kube-system pods found
	I0226 11:40:09.676654    6644 system_pods.go:61] "etcd-kubernetes-upgrade-797800" [5b00d36f-bf4f-4810-8790-01f0c64a69c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0226 11:40:09.676654    6644 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-797800" [0a6d3650-943f-46d1-abb6-82be4b2c0f45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 11:40:09.676654    6644 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-797800" [6a558d08-b827-4c2b-a4f8-271cdbf96215] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 11:40:09.676654    6644 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-797800" [ac6277f5-fbc6-4ee7-b1f3-0b7285e13b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0226 11:40:09.676654    6644 system_pods.go:61] "storage-provisioner" [cd1f38e9-477c-4c69-9dae-7f69a323c6fa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0226 11:40:09.676654    6644 system_pods.go:74] duration metric: took 10.0003ms to wait for pod list to return data ...
	I0226 11:40:09.676654    6644 kubeadm.go:581] duration metric: took 405.9875ms to wait for : map[apiserver:true system_pods:true] ...
	I0226 11:40:09.676654    6644 node_conditions.go:102] verifying NodePressure condition ...
	I0226 11:40:09.683640    6644 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0226 11:40:09.684656    6644 node_conditions.go:123] node cpu capacity is 16
	I0226 11:40:09.684656    6644 node_conditions.go:105] duration metric: took 8.0018ms to run NodePressure ...
	I0226 11:40:09.684656    6644 start.go:228] waiting for startup goroutines ...
	I0226 11:40:09.690640    6644 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 11:40:09.690640    6644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 11:40:09.699652    6644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-797800
	I0226 11:40:09.708649    6644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53950 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:40:09.864497    6644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53950 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-797800\id_rsa Username:docker}
	I0226 11:40:09.874506    6644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:40:10.044979    6644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 11:40:11.085668    6644 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.0406822s)
	I0226 11:40:11.085668    6644 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2111544s)
	I0226 11:40:11.101671    6644 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 11:40:10.659086   11208 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 11:40:10.659086   11208 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:40:10.659086   11208 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\config.json ...
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.9 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.29.0-rc.2
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.10-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.10-0
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.29.0-rc.2
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.29.0-rc.2
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.29.0-rc.2
	I0226 11:40:10.659086   11208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.11.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.11.1
	I0226 11:40:10.847146   11208 cache.go:107] acquiring lock: {Name:mke57abc3f8620cf3666e7963f5b15b56de5e769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.847146   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.29.0-rc.2 exists
	I0226 11:40:10.847146   11208 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.29.0-rc.2" took 188.059ms
	I0226 11:40:10.847146   11208 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.29.0-rc.2 succeeded
	I0226 11:40:10.856142   11208 cache.go:107] acquiring lock: {Name:mk8fcb07c9aa9509d98c1432a26cc5cdcfa07ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.856142   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.10-0 exists
	I0226 11:40:10.857149   11208 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.10-0" took 198.0624ms
	I0226 11:40:10.857149   11208 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.10-0 succeeded
	I0226 11:40:10.859137   11208 cache.go:107] acquiring lock: {Name:mkd701f5a5f1823ffc01cfd3cb7281a1d5318765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.859137   11208 cache.go:107] acquiring lock: {Name:mkde01cc07f00e40968b1b751f96063bc588c624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.860159   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.29.0-rc.2 exists
	I0226 11:40:10.860159   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.11.1 exists
	I0226 11:40:10.860159   11208 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.11.1" took 201.0719ms
	I0226 11:40:10.860159   11208 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.11.1 succeeded
	I0226 11:40:10.860159   11208 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.29.0-rc.2" took 201.0719ms
	I0226 11:40:10.860159   11208 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.29.0-rc.2 succeeded
	I0226 11:40:10.860159   11208 cache.go:107] acquiring lock: {Name:mk66aef28707119fac86a37df30f911185455e21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.860159   11208 cache.go:107] acquiring lock: {Name:mk6be2ebe09331e9d34aafc5a542b4c5ff665168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.860159   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.29.0-rc.2 exists
	I0226 11:40:10.861142   11208 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.29.0-rc.2" took 202.055ms
	I0226 11:40:10.861142   11208 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.29.0-rc.2 succeeded
	I0226 11:40:10.861142   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 exists
	I0226 11:40:10.861142   11208 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.9" took 202.055ms
	I0226 11:40:10.861142   11208 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 succeeded
	I0226 11:40:10.863147   11208 cache.go:107] acquiring lock: {Name:mk2bfa99bd9814727909a756b5654bb5142018fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.863147   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.29.0-rc.2 exists
	I0226 11:40:10.863147   11208 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.863147   11208 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.29.0-rc.2" took 204.0601ms
	I0226 11:40:10.863147   11208 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.29.0-rc.2 succeeded
	I0226 11:40:10.863147   11208 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0226 11:40:10.863147   11208 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 204.0601ms
	I0226 11:40:10.863147   11208 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0226 11:40:10.863147   11208 cache.go:87] Successfully saved all images to host disk.
	I0226 11:40:10.911048   11208 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:40:10.911182   11208 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:40:10.911182   11208 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:40:10.911278   11208 start.go:365] acquiring machines lock for no-preload-279800: {Name:mk224166009f46c9df63a917262fc46d987a8644 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:40:10.911425   11208 start.go:369] acquired machines lock for "no-preload-279800" in 146.9µs
	I0226 11:40:10.911496   11208 start.go:96] Skipping create...Using existing machine configuration
	I0226 11:40:10.911496   11208 fix.go:54] fixHost starting: 
	I0226 11:40:10.932223   11208 cli_runner.go:164] Run: docker container inspect no-preload-279800 --format={{.State.Status}}
	I0226 11:40:11.115691   11208 fix.go:102] recreateIfNeeded on no-preload-279800: state=Stopped err=<nil>
	W0226 11:40:11.115691   11208 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 11:40:11.121686   11208 out.go:177] * Restarting existing docker container for "no-preload-279800" ...
	I0226 11:40:11.103671    6644 addons.go:505] enable addons completed in 1.857978s: enabled=[storage-provisioner default-storageclass]
	I0226 11:40:11.103671    6644 start.go:233] waiting for cluster config update ...
	I0226 11:40:11.103671    6644 start.go:242] writing updated cluster config ...
	I0226 11:40:11.123665    6644 ssh_runner.go:195] Run: rm -f paused
	I0226 11:40:11.265662    6644 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0226 11:40:11.268661    6644 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-797800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 26 11:39:46 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:46Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 26 11:39:46 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:46Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 26 11:39:46 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:46Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 26 11:39:46 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:46Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 26 11:39:46 kubernetes-upgrade-797800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 26 11:39:50 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51798862e91aab624e2a0d5253c1679eccc46c767acab7c0bd09b0a03cebd52a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:50 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74f737e7e8765ffd166a6a8a444fae97bde3f9771477fdda7366dd22bf0dca2f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:50 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f9deddfff680e7c4d76071a7f906cc73f0fa470ad3ef2b753ea876ead1715749/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:50 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a42085c0fc073f19ec2416612c1dbf43436a22b24b336437050a847888cbb40e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.603117385Z" level=info msg="ignoring event" container=51798862e91aab624e2a0d5253c1679eccc46c767acab7c0bd09b0a03cebd52a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.785761183Z" level=info msg="ignoring event" container=81eaf01abc32906d2623e535f6f61b0c137294805cf84fa0926bdb0ca8f32169 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.789550358Z" level=info msg="ignoring event" container=618b2b69a48c9bdc7df37aa48d8367fb5c00bc1788cf7163dbb465e478f18a84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.791790902Z" level=info msg="ignoring event" container=f9deddfff680e7c4d76071a7f906cc73f0fa470ad3ef2b753ea876ead1715749 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.794902463Z" level=info msg="ignoring event" container=a42085c0fc073f19ec2416612c1dbf43436a22b24b336437050a847888cbb40e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.797989824Z" level=info msg="ignoring event" container=f3a1a45c1bc2bba848e8b066ca7c9b0a84cd194ccdb8976781da3c0080ff559c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:57 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:57.798042325Z" level=info msg="ignoring event" container=74f737e7e8765ffd166a6a8a444fae97bde3f9771477fdda7366dd22bf0dca2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:58 kubernetes-upgrade-797800 dockerd[3757]: time="2024-02-26T11:39:58.792955724Z" level=info msg="ignoring event" container=fa8d57e3dfbcd0c1d695c8ec1fccee31b0e07283a4b3e185551a0aed601ee1e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:39:59 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d5daedb34662235c020ff35677aee3b29d165152e688cf7a7efd71388e6fa7c5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:59 kubernetes-upgrade-797800 cri-dockerd[3998]: W0226 11:39:59.423554    3998 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:39:59 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/72e680c4aa8c66c385745f0193a4dc428659c577ff2a2de23a1e53fcd4e6e674/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:59 kubernetes-upgrade-797800 cri-dockerd[3998]: W0226 11:39:59.499640    3998 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:39:59 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:39:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a25578c520141c159d8b32892a98bfedd017d7c831e4ab30a68b27bd76a4cf8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:39:59 kubernetes-upgrade-797800 cri-dockerd[3998]: W0226 11:39:59.522071    3998 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:40:02 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:40:02Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"452ec814f8458be137de1018aa5066e9f4e703a257085639e27849d2eae85032\". Proceed without further sandbox information."
	Feb 26 11:40:02 kubernetes-upgrade-797800 cri-dockerd[3998]: time="2024-02-26T11:40:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76676e6c5f58d4b80f771e0dd34387fefdb1287f9fb05bc6e255dd83317163f7/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f2efdac5372c       bbb47a0f83324       12 seconds ago      Running             kube-apiserver            2                   76676e6c5f58d       kube-apiserver-kubernetes-upgrade-797800
	e8592a3547664       4270645ed6b7a       12 seconds ago      Running             kube-scheduler            2                   5a25578c52014       kube-scheduler-kubernetes-upgrade-797800
	83bfbaabdcdc0       d4e01cdf63970       12 seconds ago      Running             kube-controller-manager   2                   72e680c4aa8c6       kube-controller-manager-kubernetes-upgrade-797800
	647fcece2fbfa       a0eed15eed449       13 seconds ago      Running             etcd                      2                   d5daedb346622       etcd-kubernetes-upgrade-797800
	618b2b69a48c9       d4e01cdf63970       23 seconds ago      Exited              kube-controller-manager   1                   74f737e7e8765       kube-controller-manager-kubernetes-upgrade-797800
	81eaf01abc329       a0eed15eed449       23 seconds ago      Exited              etcd                      1                   f9deddfff680e       etcd-kubernetes-upgrade-797800
	f3a1a45c1bc2b       4270645ed6b7a       23 seconds ago      Exited              kube-scheduler            1                   a42085c0fc073       kube-scheduler-kubernetes-upgrade-797800
	fa8d57e3dfbcd       bbb47a0f83324       24 seconds ago      Exited              kube-apiserver            1                   51798862e91aa       kube-apiserver-kubernetes-upgrade-797800
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-797800
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-797800
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 11:39:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-797800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 11:40:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 11:40:07 +0000   Mon, 26 Feb 2024 11:39:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 11:40:07 +0000   Mon, 26 Feb 2024 11:39:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 11:40:07 +0000   Mon, 26 Feb 2024 11:39:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 11:40:07 +0000   Mon, 26 Feb 2024 11:39:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-797800
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868664Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868664Ki
	  pods:               110
	System Info:
	  Machine ID:                 96a0caa0ba874064ac82cfcf915e2d69
	  System UUID:                96a0caa0ba874064ac82cfcf915e2d69
	  Boot ID:                    cfe72d5e-3bc4-4cbf-8f9a-0bb1f1ad831b
	  Kernel Version:             5.15.133.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-797800                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         52s
	  kube-system                 kube-apiserver-kubernetes-upgrade-797800             250m (1%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-797800    200m (1%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-kubernetes-upgrade-797800             100m (0%)     0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (4%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 63s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet  Node kubernetes-upgrade-797800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet  Node kubernetes-upgrade-797800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x7 over 63s)  kubelet  Node kubernetes-upgrade-797800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 14s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet  Node kubernetes-upgrade-797800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet  Node kubernetes-upgrade-797800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x7 over 14s)  kubelet  Node kubernetes-upgrade-797800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	
	
	==> etcd [647fcece2fbf] <==
	{"level":"info","ts":"2024-02-26T11:40:02.774577Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T11:40:02.774596Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T11:40:02.774915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-02-26T11:40:02.77509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-02-26T11:40:02.775488Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T11:40:02.775618Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T11:40:02.778236Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-26T11:40:02.778294Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-26T11:40:02.778643Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-26T11:40:02.778729Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-26T11:40:02.778765Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-26T11:40:03.80006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-26T11:40:03.800185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-26T11:40:03.80023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-26T11:40:03.800248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-02-26T11:40:03.800255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-26T11:40:03.800265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-02-26T11:40:03.800273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-26T11:40:03.804994Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-797800 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T11:40:03.805123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:40:03.805374Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:40:03.805839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T11:40:03.805966Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T11:40:03.808154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-26T11:40:03.809408Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [81eaf01abc32] <==
	{"level":"info","ts":"2024-02-26T11:39:51.999574Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-26T11:39:53.186811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-26T11:39:53.186882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-26T11:39:53.186909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-26T11:39:53.18693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-02-26T11:39:53.186963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-26T11:39:53.186983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-02-26T11:39:53.186999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-26T11:39:53.202961Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-797800 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T11:39:53.273853Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:39:53.274163Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:39:53.27465Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T11:39:53.274792Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T11:39:53.27999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-26T11:39:53.280887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T11:39:57.47872Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-26T11:39:57.478791Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-797800","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-02-26T11:39:57.478901Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T11:39:57.479299Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T11:39:57.577052Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T11:39:57.577217Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-26T11:39:57.577508Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-02-26T11:39:57.59266Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-26T11:39:57.593086Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-26T11:39:57.593298Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-797800","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> kernel <==
	 11:40:15 up  1:20,  0 users,  load average: 5.65, 5.66, 4.25
	Linux kubernetes-upgrade-797800 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [8f2efdac5372] <==
	I0226 11:40:06.919656       1 controller.go:85] Starting OpenAPI V3 controller
	I0226 11:40:06.919811       1 available_controller.go:423] Starting AvailableConditionController
	I0226 11:40:06.919827       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0226 11:40:06.921081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0226 11:40:06.921407       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0226 11:40:07.074360       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0226 11:40:07.074854       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0226 11:40:07.075519       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0226 11:40:07.075545       1 aggregator.go:165] initial CRD sync complete...
	I0226 11:40:07.075552       1 autoregister_controller.go:141] Starting autoregister controller
	I0226 11:40:07.075559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0226 11:40:07.075570       1 cache.go:39] Caches are synced for autoregister controller
	I0226 11:40:07.085065       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0226 11:40:07.174950       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0226 11:40:07.175087       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0226 11:40:07.175326       1 shared_informer.go:318] Caches are synced for configmaps
	I0226 11:40:07.175364       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0226 11:40:07.175395       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0226 11:40:07.278313       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0226 11:40:07.926398       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0226 11:40:09.014034       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0226 11:40:09.037274       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0226 11:40:09.104598       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0226 11:40:09.179737       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0226 11:40:09.197207       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [fa8d57e3dfbc] <==
	W0226 11:39:58.487129       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.487231       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.487474       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.487573       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.487584       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.487640       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.487682       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.488088       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.488349       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.488494       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.488540       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.488635       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.488741       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489028       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489036       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489159       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489061       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489251       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489253       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489271       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489284       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489348       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489775       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489908       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:39:58.489957       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [618b2b69a48c] <==
	I0226 11:39:53.576872       1 serving.go:380] Generated self-signed cert in-memory
	I0226 11:39:54.379834       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0226 11:39:54.379892       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:39:54.382031       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0226 11:39:54.382184       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0226 11:39:54.382721       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0226 11:39:54.382848       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [83bfbaabdcdc] <==
	I0226 11:40:09.084614       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0226 11:40:09.084731       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0226 11:40:09.084829       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:40:09.086206       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0226 11:40:09.086331       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0226 11:40:09.086370       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:40:09.087640       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0226 11:40:09.087688       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0226 11:40:09.087782       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:40:09.088017       1 shared_informer.go:318] Caches are synced for tokens
	I0226 11:40:09.089153       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0226 11:40:09.089456       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0226 11:40:09.089477       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0226 11:40:09.089508       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:40:09.093959       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0226 11:40:09.094104       1 cleaner.go:83] "Starting CSR cleaner controller"
	E0226 11:40:09.098378       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0226 11:40:09.098538       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0226 11:40:09.098566       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0226 11:40:09.102874       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0226 11:40:09.103305       1 job_controller.go:224] "Starting job controller"
	I0226 11:40:09.103422       1 shared_informer.go:311] Waiting for caches to sync for job
	I0226 11:40:09.107687       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0226 11:40:09.107820       1 ttl_controller.go:124] "Starting TTL controller"
	I0226 11:40:09.108447       1 shared_informer.go:311] Waiting for caches to sync for TTL
	
	
	==> kube-scheduler [e8592a354766] <==
	I0226 11:40:04.122390       1 serving.go:380] Generated self-signed cert in-memory
	W0226 11:40:06.984571       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0226 11:40:06.984768       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0226 11:40:06.984794       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0226 11:40:06.984807       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0226 11:40:07.175121       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0226 11:40:07.175259       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:40:07.178101       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0226 11:40:07.178280       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:40:07.179013       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0226 11:40:07.179158       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0226 11:40:07.278608       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f3a1a45c1bc2] <==
	I0226 11:39:52.592030       1 serving.go:380] Generated self-signed cert in-memory
	W0226 11:39:55.887239       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0226 11:39:55.887311       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0226 11:39:55.887333       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0226 11:39:55.887347       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0226 11:39:56.083065       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0226 11:39:56.083181       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:39:56.086569       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0226 11:39:56.086983       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:39:56.088042       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0226 11:39:56.088096       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0226 11:39:56.188412       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:39:57.497898       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0226 11:39:57.574522       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0226 11:39:57.575675       1 run.go:74] "command failed" err="finished without leader elect"
	I0226 11:39:57.575898       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Feb 26 11:40:01 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:01.712799    5220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f945cc2f09d9dffd81425b0ab975671fb94c3b73fa18e5f4e75429a6aecf0c"
	Feb 26 11:40:01 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:01.712808    5220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21122500e8c3d650a90c562e455645274df6474a1d230c125f15add23c763170"
	Feb 26 11:40:01 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:01.802730    5220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-797800?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Feb 26 11:40:01 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:01.978419    5220 scope.go:117] "RemoveContainer" containerID="81eaf01abc32906d2623e535f6f61b0c137294805cf84fa0926bdb0ca8f32169"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:02.013727    5220 scope.go:117] "RemoveContainer" containerID="618b2b69a48c9bdc7df37aa48d8367fb5c00bc1788cf7163dbb465e478f18a84"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:02.013759    5220 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-797800"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.015355    5220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-797800"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:02.027467    5220 scope.go:117] "RemoveContainer" containerID="f3a1a45c1bc2bba848e8b066ca7c9b0a84cd194ccdb8976781da3c0080ff559c"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: W0226 11:40:02.170818    5220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.170897    5220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: W0226 11:40:02.385920    5220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.386023    5220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: W0226 11:40:02.392222    5220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-797800&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.392418    5220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-797800&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.604314    5220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-797800?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: W0226 11:40:02.774806    5220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.774999    5220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:02.894400    5220 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-797800"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: E0226 11:40:02.895671    5220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-797800"
	Feb 26 11:40:02 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:02.975342    5220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76676e6c5f58d4b80f771e0dd34387fefdb1287f9fb05bc6e255dd83317163f7"
	Feb 26 11:40:04 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:04.508860    5220 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-797800"
	Feb 26 11:40:07 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:07.183959    5220 apiserver.go:52] "Watching apiserver"
	Feb 26 11:40:07 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:07.190305    5220 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-797800"
	Feb 26 11:40:07 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:07.190540    5220 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-797800"
	Feb 26 11:40:07 kubernetes-upgrade-797800 kubelet[5220]: I0226 11:40:07.200934    5220 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:40:13.268675    7736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
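The opaque directory name in the stderr warning above is not random: the Docker CLI stores context metadata under `~/.docker/contexts/meta/<sha256(context-name)>/meta.json`, so the `default` context always resolves to the `37a8eec1...` path the test captures. A quick sanity check:

```python
import hashlib

# Docker names each context's metadata directory after the SHA-256 digest
# of the context name, so "default" maps to the path seen in the warning.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The warning itself is benign here: it only means no `default` context metadata file exists on this Jenkins agent, but the test harness treats any stderr output as unexpected spam.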
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-797800 -n kubernetes-upgrade-797800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-797800 -n kubernetes-upgrade-797800: (1.3689155s)
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-797800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-797800 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-797800 describe pod storage-provisioner: exit status 1 (196.1431ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-797800 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-797800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-797800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-797800: (6.6761934s)
--- FAIL: TestKubernetesUpgrade (793.41s)

TestStartStop/group/old-k8s-version/serial/FirstStart (565.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-321200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-321200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 109 (9m23.6020685s)

-- stdout --
	* [old-k8s-version-321200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-321200 in cluster old-k8s-version-321200
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 26 11:46:27 old-k8s-version-321200 kubelet[5770]: E0226 11:46:27.311901    5770 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:46:28 old-k8s-version-321200 kubelet[5770]: E0226 11:46:28.306593    5770 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:46:35 old-k8s-version-321200 kubelet[5770]: E0226 11:46:35.300820    5770 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	
	

-- /stdout --
** stderr ** 
	W0226 11:37:27.555366   13020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 11:37:27.644408   13020 out.go:291] Setting OutFile to fd 1924 ...
	I0226 11:37:27.645399   13020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:37:27.645399   13020 out.go:304] Setting ErrFile to fd 1960...
	I0226 11:37:27.645399   13020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:37:27.670312   13020 out.go:298] Setting JSON to false
	I0226 11:37:27.673545   13020 start.go:129] hostinfo: {"hostname":"minikube7","uptime":4724,"bootTime":1708942723,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:37:27.673545   13020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:37:28.021340   13020 out.go:177] * [old-k8s-version-321200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:37:28.028931   13020 notify.go:220] Checking for updates...
	I0226 11:37:28.031999   13020 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:37:28.037665   13020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:37:28.052130   13020 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:37:28.060933   13020 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:37:28.071503   13020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:37:28.076552   13020 config.go:182] Loaded profile config "cert-expiration-720300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:37:28.077298   13020 config.go:182] Loaded profile config "cert-options-380200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:37:28.077298   13020 config.go:182] Loaded profile config "kubernetes-upgrade-797800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:37:28.078183   13020 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:37:28.377445   13020 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:37:28.388883   13020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:37:28.795951   13020 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:92 SystemTime:2024-02-26 11:37:28.745162558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 11:37:28.801946   13020 out.go:177] * Using the docker driver based on user configuration
	I0226 11:37:28.805947   13020 start.go:299] selected driver: docker
	I0226 11:37:28.805947   13020 start.go:903] validating driver "docker" against <nil>
	I0226 11:37:28.805947   13020 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:37:28.886785   13020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:37:29.236673   13020 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:90 SystemTime:2024-02-26 11:37:29.194673336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 11:37:29.237310   13020 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:37:29.242951   13020 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:37:29.246746   13020 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 11:37:29.252339   13020 cni.go:84] Creating CNI manager for ""
	I0226 11:37:29.252339   13020 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:37:29.252339   13020 start_flags.go:323] config:
	{Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:37:29.260858   13020 out.go:177] * Starting control plane node old-k8s-version-321200 in cluster old-k8s-version-321200
	I0226 11:37:29.266524   13020 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:37:29.271112   13020 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:37:29.277599   13020 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:37:29.277599   13020 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:37:29.278756   13020 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 11:37:29.278756   13020 cache.go:56] Caching tarball of preloaded images
	I0226 11:37:29.278756   13020 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:37:29.279289   13020 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0226 11:37:29.279400   13020 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\config.json ...
	I0226 11:37:29.279536   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\config.json: {Name:mkaff96102becf5cfaf692b4242e82b960474d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:37:29.455934   13020 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:37:29.455934   13020 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:37:29.455934   13020 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:37:29.455934   13020 start.go:365] acquiring machines lock for old-k8s-version-321200: {Name:mk55b0c0259e24b9978efabbdf90c76106474ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:37:29.455934   13020 start.go:369] acquired machines lock for "old-k8s-version-321200" in 0s
	I0226 11:37:29.455934   13020 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:37:29.455934   13020 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:37:29.462912   13020 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0226 11:37:29.462912   13020 start.go:159] libmachine.API.Create for "old-k8s-version-321200" (driver="docker")
	I0226 11:37:29.462912   13020 client.go:168] LocalClient.Create starting
	I0226 11:37:29.463590   13020 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 11:37:29.463590   13020 main.go:141] libmachine: Decoding PEM data...
	I0226 11:37:29.464118   13020 main.go:141] libmachine: Parsing certificate...
	I0226 11:37:29.464983   13020 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 11:37:29.464983   13020 main.go:141] libmachine: Decoding PEM data...
	I0226 11:37:29.464983   13020 main.go:141] libmachine: Parsing certificate...
	I0226 11:37:29.475840   13020 cli_runner.go:164] Run: docker network inspect old-k8s-version-321200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:37:29.656789   13020 cli_runner.go:211] docker network inspect old-k8s-version-321200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:37:29.665783   13020 network_create.go:281] running [docker network inspect old-k8s-version-321200] to gather additional debugging logs...
	I0226 11:37:29.665783   13020 cli_runner.go:164] Run: docker network inspect old-k8s-version-321200
	W0226 11:37:29.846833   13020 cli_runner.go:211] docker network inspect old-k8s-version-321200 returned with exit code 1
	I0226 11:37:29.846833   13020 network_create.go:284] error running [docker network inspect old-k8s-version-321200]: docker network inspect old-k8s-version-321200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-321200 not found
	I0226 11:37:29.846833   13020 network_create.go:286] output of [docker network inspect old-k8s-version-321200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-321200 not found
	
	** /stderr **
	I0226 11:37:29.860835   13020 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:37:30.087944   13020 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:37:30.111888   13020 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022d66c0}
	I0226 11:37:30.111888   13020 network_create.go:124] attempt to create docker network old-k8s-version-321200 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0226 11:37:30.124351   13020 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200
	W0226 11:37:30.318680   13020 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200 returned with exit code 1
	W0226 11:37:30.319653   13020 network_create.go:149] failed to create docker network old-k8s-version-321200 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 11:37:30.319653   13020 network_create.go:116] failed to create docker network old-k8s-version-321200 192.168.58.0/24, will retry: subnet is taken
	I0226 11:37:30.351339   13020 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:37:30.382354   13020 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:37:30.407902   13020 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021b4900}
	I0226 11:37:30.408034   13020 network_create.go:124] attempt to create docker network old-k8s-version-321200 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0226 11:37:30.419718   13020 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200
	W0226 11:37:30.594098   13020 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200 returned with exit code 1
	W0226 11:37:30.594098   13020 network_create.go:149] failed to create docker network old-k8s-version-321200 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 11:37:30.594098   13020 network_create.go:116] failed to create docker network old-k8s-version-321200 192.168.76.0/24, will retry: subnet is taken
	I0226 11:37:30.632853   13020 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:37:30.656597   13020 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021b50e0}
	I0226 11:37:30.656597   13020 network_create.go:124] attempt to create docker network old-k8s-version-321200 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0226 11:37:30.668032   13020 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-321200 old-k8s-version-321200
	I0226 11:37:31.076552   13020 network_create.go:108] docker network old-k8s-version-321200 192.168.85.0/24 created
	I0226 11:37:31.076552   13020 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-321200" container
	I0226 11:37:31.098885   13020 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:37:31.288206   13020 cli_runner.go:164] Run: docker volume create old-k8s-version-321200 --label name.minikube.sigs.k8s.io=old-k8s-version-321200 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:37:31.557550   13020 oci.go:103] Successfully created a docker volume old-k8s-version-321200
	I0226 11:37:31.567058   13020 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-321200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-321200 --entrypoint /usr/bin/test -v old-k8s-version-321200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:37:34.077725   13020 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-321200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-321200 --entrypoint /usr/bin/test -v old-k8s-version-321200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (2.5106502s)
	I0226 11:37:34.078259   13020 oci.go:107] Successfully prepared a docker volume old-k8s-version-321200
	I0226 11:37:34.078259   13020 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:37:34.078388   13020 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:37:34.092094   13020 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-321200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:37:53.478971   13020 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-321200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (19.3865425s)
	I0226 11:37:53.479035   13020 kic.go:203] duration metric: took 19.400521 seconds to extract preloaded images to volume
	I0226 11:37:53.487458   13020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:37:53.841575   13020 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:37:53.802185035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:37:53.852100   13020 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 11:37:54.240496   13020 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-321200 --name old-k8s-version-321200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-321200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-321200 --network old-k8s-version-321200 --ip 192.168.85.2 --volume old-k8s-version-321200:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 11:37:55.158250   13020 cli_runner.go:164] Run: docker container inspect old-k8s-version-321200 --format={{.State.Running}}
	I0226 11:37:55.359207   13020 cli_runner.go:164] Run: docker container inspect old-k8s-version-321200 --format={{.State.Status}}
	I0226 11:37:55.544375   13020 cli_runner.go:164] Run: docker exec old-k8s-version-321200 stat /var/lib/dpkg/alternatives/iptables
	I0226 11:37:55.820382   13020 oci.go:144] the created container "old-k8s-version-321200" has a running status.
	I0226 11:37:55.820382   13020 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa...
	I0226 11:37:56.184625   13020 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 11:37:56.433546   13020 cli_runner.go:164] Run: docker container inspect old-k8s-version-321200 --format={{.State.Status}}
	I0226 11:37:56.631545   13020 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 11:37:56.631545   13020 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-321200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 11:37:56.898191   13020 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa...
	I0226 11:37:59.561834   13020 cli_runner.go:164] Run: docker container inspect old-k8s-version-321200 --format={{.State.Status}}
	I0226 11:37:59.708851   13020 machine.go:88] provisioning docker machine ...
	I0226 11:37:59.708946   13020 ubuntu.go:169] provisioning hostname "old-k8s-version-321200"
	I0226 11:37:59.717707   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:37:59.900350   13020 main.go:141] libmachine: Using SSH client type: native
	I0226 11:37:59.910285   13020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53922 <nil> <nil>}
	I0226 11:37:59.910285   13020 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-321200 && echo "old-k8s-version-321200" | sudo tee /etc/hostname
	I0226 11:38:00.135430   13020 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-321200
	
	I0226 11:38:00.145901   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:00.331891   13020 main.go:141] libmachine: Using SSH client type: native
	I0226 11:38:00.331891   13020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53922 <nil> <nil>}
	I0226 11:38:00.332895   13020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-321200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-321200/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-321200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:38:00.515909   13020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:38:00.515909   13020 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 11:38:00.515909   13020 ubuntu.go:177] setting up certificates
	I0226 11:38:00.515909   13020 provision.go:83] configureAuth start
	I0226 11:38:00.528912   13020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-321200
	I0226 11:38:00.723175   13020 provision.go:138] copyHostCerts
	I0226 11:38:00.723175   13020 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 11:38:00.723175   13020 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 11:38:00.723175   13020 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 11:38:00.725197   13020 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 11:38:00.725197   13020 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 11:38:00.725197   13020 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 11:38:00.726209   13020 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 11:38:00.726209   13020 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 11:38:00.726209   13020 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 11:38:00.727189   13020 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-321200 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-321200]
	I0226 11:38:01.060946   13020 provision.go:172] copyRemoteCerts
	I0226 11:38:01.076989   13020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:38:01.088941   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:01.265940   13020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53922 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:38:01.400941   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 11:38:01.441950   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0226 11:38:01.480956   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 11:38:01.521937   13020 provision.go:86] duration metric: configureAuth took 1.0060209s
	I0226 11:38:01.521937   13020 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:38:01.522954   13020 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:38:01.534962   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:01.737953   13020 main.go:141] libmachine: Using SSH client type: native
	I0226 11:38:01.738951   13020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53922 <nil> <nil>}
	I0226 11:38:01.738951   13020 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 11:38:01.935668   13020 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 11:38:01.935668   13020 ubuntu.go:71] root file system type: overlay
	I0226 11:38:01.935668   13020 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 11:38:01.945690   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:02.153684   13020 main.go:141] libmachine: Using SSH client type: native
	I0226 11:38:02.153684   13020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53922 <nil> <nil>}
	I0226 11:38:02.153684   13020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 11:38:02.374754   13020 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 11:38:02.386748   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:02.568736   13020 main.go:141] libmachine: Using SSH client type: native
	I0226 11:38:02.568736   13020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 53922 <nil> <nil>}
	I0226 11:38:02.568736   13020 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 11:38:05.401621   13020 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 11:38:02.357686472 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 11:38:05.401621   13020 machine.go:91] provisioned docker machine in 5.6926381s
	I0226 11:38:05.401621   13020 client.go:171] LocalClient.Create took 35.9384752s
	I0226 11:38:05.401621   13020 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-321200" took 35.9384752s
	I0226 11:38:05.401621   13020 start.go:300] post-start starting for "old-k8s-version-321200" (driver="docker")
	I0226 11:38:05.401621   13020 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:38:05.419631   13020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:38:05.433615   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:05.636625   13020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53922 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:38:05.776668   13020 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:38:05.788616   13020 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:38:05.788616   13020 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:38:05.788616   13020 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:38:05.788616   13020 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:38:05.788616   13020 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 11:38:05.788616   13020 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 11:38:05.790628   13020 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 11:38:05.813622   13020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 11:38:05.839618   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 11:38:05.902610   13020 start.go:303] post-start completed in 500.9854ms
	I0226 11:38:05.923616   13020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-321200
	I0226 11:38:06.175615   13020 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\config.json ...
	I0226 11:38:06.202647   13020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:38:06.219618   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:06.493612   13020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53922 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:38:06.638633   13020 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:38:06.652633   13020 start.go:128] duration metric: createHost completed in 37.1964565s
	I0226 11:38:06.652633   13020 start.go:83] releasing machines lock for "old-k8s-version-321200", held for 37.1964565s
	I0226 11:38:06.661623   13020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-321200
	I0226 11:38:06.830486   13020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:38:06.843622   13020 ssh_runner.go:195] Run: cat /version.json
	I0226 11:38:06.843622   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:06.855627   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:07.061636   13020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53922 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:38:07.064622   13020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53922 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:38:07.198450   13020 ssh_runner.go:195] Run: systemctl --version
	I0226 11:38:07.408443   13020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 11:38:07.436441   13020 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0226 11:38:07.460460   13020 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0226 11:38:07.481467   13020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 11:38:07.541555   13020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 11:38:07.574575   13020 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 11:38:07.574575   13020 start.go:475] detecting cgroup driver to use...
	I0226 11:38:07.574575   13020 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:38:07.575568   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:38:07.633590   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0226 11:38:07.741056   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 11:38:07.759384   13020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 11:38:07.775423   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 11:38:07.807365   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:38:07.841320   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 11:38:07.881321   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:38:07.912306   13020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:38:07.943316   13020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 11:38:07.973334   13020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:38:08.011082   13020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:38:08.042098   13020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:38:08.206136   13020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 11:38:08.367303   13020 start.go:475] detecting cgroup driver to use...
	I0226 11:38:08.367474   13020 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:38:08.383423   13020 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 11:38:08.409972   13020 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 11:38:08.424852   13020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 11:38:08.449692   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:38:08.513148   13020 ssh_runner.go:195] Run: which cri-dockerd
	I0226 11:38:08.542580   13020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 11:38:08.561582   13020 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 11:38:08.611788   13020 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 11:38:08.797625   13020 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 11:38:09.026841   13020 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 11:38:09.027696   13020 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 11:38:09.105424   13020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:38:09.275098   13020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:38:10.172471   13020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:38:10.233718   13020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:38:10.287924   13020 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0226 11:38:10.297930   13020 cli_runner.go:164] Run: docker exec -t old-k8s-version-321200 dig +short host.docker.internal
	I0226 11:38:10.567517   13020 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 11:38:10.578525   13020 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 11:38:10.587538   13020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:38:10.616528   13020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:38:10.789613   13020 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:38:10.798627   13020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:38:10.843630   13020 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 11:38:10.843630   13020 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:38:10.855618   13020 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:38:10.887619   13020 ssh_runner.go:195] Run: which lz4
	I0226 11:38:10.919502   13020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 11:38:10.934066   13020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 11:38:10.934066   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0226 11:38:25.066982   13020 docker.go:649] Took 14.165681 seconds to copy over tarball
	I0226 11:38:25.082408   13020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 11:38:28.739221   13020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.6567881s)
	I0226 11:38:28.739221   13020 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 11:38:28.823755   13020 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:38:28.843943   13020 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 11:38:28.896994   13020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:38:29.064408   13020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:38:36.506032   13020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.4415746s)
	I0226 11:38:36.518562   13020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:38:36.566515   13020 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 11:38:36.566515   13020 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:38:36.567495   13020 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 11:38:36.585504   13020 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:38:36.592519   13020 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:38:36.601510   13020 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 11:38:36.602506   13020 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:38:36.603496   13020 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:38:36.607518   13020 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:38:36.609495   13020 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:38:36.611496   13020 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 11:38:36.612506   13020 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:38:36.617496   13020 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 11:38:36.618509   13020 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:38:36.619497   13020 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:38:36.625549   13020 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 11:38:36.625549   13020 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:38:36.626521   13020 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:38:36.631499   13020 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0226 11:38:36.702774   13020 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:38:36.795649   13020 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:38:36.889951   13020 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:38:36.984090   13020 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:38:37.061552   13020 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:38:37.137804   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 11:38:37.148591   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	W0226 11:38:37.152495   13020 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:38:37.194168   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:38:37.195156   13020 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 11:38:37.195156   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:38:37.195156   13020 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 11:38:37.198164   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 11:38:37.202163   13020 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 11:38:37.202163   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:38:37.202163   13020 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:38:37.207154   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 11:38:37.213159   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:38:37.240182   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0226 11:38:37.261914   13020 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:38:37.279819   13020 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 11:38:37.279819   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:38:37.279944   13020 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:38:37.284158   13020 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 11:38:37.284158   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:38:37.284158   13020 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:38:37.293167   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:38:37.299165   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	W0226 11:38:37.355722   13020 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:38:37.398527   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:38:37.398527   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:38:37.398527   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:38:37.398527   13020 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 11:38:37.398527   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:38:37.398527   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:38:37.398527   13020 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 11:38:37.409946   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0226 11:38:37.427007   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:38:37.456903   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:38:37.464890   13020 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 11:38:37.464890   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:38:37.464890   13020 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:38:37.472893   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:38:37.490505   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:38:37.522312   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:38:37.531011   13020 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 11:38:37.531011   13020 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:38:37.531011   13020 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:38:37.540382   13020 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:38:37.585043   13020 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:38:37.618072   13020 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:38:37.661904   13020 cache_images.go:92] LoadImages completed in 1.0944023s
	W0226 11:38:37.661904   13020 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2: The system cannot find the file specified.
	I0226 11:38:37.671460   13020 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:38:37.796708   13020 cni.go:84] Creating CNI manager for ""
	I0226 11:38:37.796708   13020 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:38:37.796708   13020 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:38:37.796708   13020 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-321200 NodeName:old-k8s-version-321200 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 11:38:37.797695   13020 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-321200"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-321200
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:38:37.797695   13020 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-321200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:38:37.811687   13020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 11:38:37.832695   13020 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:38:37.848692   13020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:38:37.871120   13020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0226 11:38:37.904820   13020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:38:37.938994   13020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0226 11:38:37.986625   13020 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:38:37.997548   13020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:38:38.021918   13020 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200 for IP: 192.168.85.2
	I0226 11:38:38.021918   13020 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:38.022930   13020 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:38:38.022930   13020 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 11:38:38.023941   13020 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.key
	I0226 11:38:38.023941   13020 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.crt with IP's: []
	I0226 11:38:38.552741   13020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.crt ...
	I0226 11:38:38.552741   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.crt: {Name:mkc10b9454430202ee9b52b1b3a4bf1db8bde8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:38.553747   13020 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.key ...
	I0226 11:38:38.553747   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.key: {Name:mkd2851ed06bcc7423400a7cc65c8a39303a61a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:38.554746   13020 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key.43b9df8c
	I0226 11:38:38.555743   13020 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:38:38.865635   13020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt.43b9df8c ...
	I0226 11:38:38.865635   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt.43b9df8c: {Name:mk87cf47668463d77788305b53e03d33c545d62d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:38.866633   13020 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key.43b9df8c ...
	I0226 11:38:38.866633   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key.43b9df8c: {Name:mkb393301250e736865fc24d2ddbc2c3eb717e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:38.867638   13020 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt
	I0226 11:38:38.880636   13020 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key
	I0226 11:38:38.882624   13020 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key
	I0226 11:38:38.882624   13020 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.crt with IP's: []
	I0226 11:38:39.037827   13020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.crt ...
	I0226 11:38:39.037827   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.crt: {Name:mk0fdff0dd32fcc66652ca8a006350ee86abb808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:39.039836   13020 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key ...
	I0226 11:38:39.039836   13020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key: {Name:mk4523bd340bd2820fcca03b3bedb4a0130ae518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:38:39.054825   13020 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:38:39.054990   13020 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:38:39.055340   13020 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:38:39.055340   13020 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:38:39.056004   13020 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:38:39.056213   13020 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:38:39.056521   13020 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:38:39.059014   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:38:39.102944   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 11:38:39.149532   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:38:39.191329   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 11:38:39.235398   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:38:39.271391   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:38:39.310400   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:38:39.352398   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:38:39.399408   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:38:39.449689   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:38:39.497664   13020 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:38:39.538701   13020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:38:39.579698   13020 ssh_runner.go:195] Run: openssl version
	I0226 11:38:39.606676   13020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:38:39.639686   13020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:38:39.650665   13020 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:38:39.663678   13020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:38:39.689669   13020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:38:39.724692   13020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:38:39.754669   13020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:38:39.769486   13020 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:38:39.780459   13020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:38:39.815441   13020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:38:39.849442   13020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:38:39.879442   13020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:38:39.889442   13020 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:38:39.902453   13020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:38:39.933456   13020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:38:39.966447   13020 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:38:39.974446   13020 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 11:38:39.975469   13020 kubeadm.go:404] StartCluster: {Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:38:39.985454   13020 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:38:40.057444   13020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:38:40.095496   13020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:38:40.118459   13020 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:38:40.133457   13020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:38:40.154451   13020 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:38:40.154451   13020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:38:40.548456   13020 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 11:38:40.548456   13020 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 11:38:40.667468   13020 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 11:38:40.875453   13020 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:42:45.267822   13020 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 11:42:45.268737   13020 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 11:42:45.277790   13020 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 11:42:45.278059   13020 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:42:45.278453   13020 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:42:45.278453   13020 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:42:45.279005   13020 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:42:45.279567   13020 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:42:45.280026   13020 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:42:45.280323   13020 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 11:42:45.280588   13020 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:42:45.287364   13020 out.go:204]   - Generating certificates and keys ...
	I0226 11:42:45.287670   13020 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:42:45.287670   13020 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:42:45.287670   13020 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:42:45.287670   13020 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:42:45.288319   13020 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:42:45.288439   13020 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:42:45.288439   13020 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:42:45.288439   13020 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-321200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0226 11:42:45.288439   13020 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:42:45.288439   13020 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-321200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0226 11:42:45.288439   13020 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:42:45.289545   13020 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:42:45.289658   13020 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:42:45.289658   13020 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:42:45.289658   13020 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:42:45.289658   13020 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:42:45.289658   13020 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:42:45.290367   13020 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:42:45.290535   13020 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:42:45.295093   13020 out.go:204]   - Booting up control plane ...
	I0226 11:42:45.295617   13020 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:42:45.296023   13020 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:42:45.296442   13020 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:42:45.296586   13020 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:42:45.297180   13020 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:42:45.297354   13020 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 11:42:45.297408   13020 kubeadm.go:322] 
	I0226 11:42:45.297618   13020 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 11:42:45.297776   13020 kubeadm.go:322] 	timed out waiting for the condition
	I0226 11:42:45.297776   13020 kubeadm.go:322] 
	I0226 11:42:45.297776   13020 kubeadm.go:322] This error is likely caused by:
	I0226 11:42:45.297776   13020 kubeadm.go:322] 	- The kubelet is not running
	I0226 11:42:45.298363   13020 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 11:42:45.298363   13020 kubeadm.go:322] 
	I0226 11:42:45.298570   13020 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 11:42:45.298570   13020 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 11:42:45.298570   13020 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 11:42:45.299017   13020 kubeadm.go:322] 
	I0226 11:42:45.299109   13020 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 11:42:45.299662   13020 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 11:42:45.299849   13020 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 11:42:45.299849   13020 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 11:42:45.299849   13020 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 11:42:45.300520   13020 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0226 11:42:45.300742   13020 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-321200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-321200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0226 11:42:45.300970   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 11:42:46.800127   13020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.4991475s)
	I0226 11:42:46.812968   13020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:42:46.837454   13020 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:42:46.848440   13020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:42:46.868418   13020 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:42:46.868418   13020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:42:47.192595   13020 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 11:42:47.193185   13020 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 11:42:47.302879   13020 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 11:42:47.497797   13020 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:46:49.937869   13020 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 11:46:49.938878   13020 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 11:46:49.943855   13020 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 11:46:49.944864   13020 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:46:49.944864   13020 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:46:49.945873   13020 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:46:49.945873   13020 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:46:49.945873   13020 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:46:49.945873   13020 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:46:49.945873   13020 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 11:46:49.946865   13020 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:46:49.948855   13020 out.go:204]   - Generating certificates and keys ...
	I0226 11:46:49.949889   13020 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:46:49.949889   13020 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:46:49.949889   13020 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 11:46:49.949889   13020 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 11:46:49.950851   13020 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 11:46:49.950851   13020 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 11:46:49.950851   13020 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 11:46:49.950851   13020 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 11:46:49.951852   13020 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 11:46:49.951852   13020 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 11:46:49.951852   13020 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 11:46:49.951852   13020 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:46:49.951852   13020 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:46:49.952866   13020 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:46:49.952866   13020 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:46:49.952866   13020 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:46:49.952866   13020 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:46:49.955880   13020 out.go:204]   - Booting up control plane ...
	I0226 11:46:49.955880   13020 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:46:49.955880   13020 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:46:49.956875   13020 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:46:49.956875   13020 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:46:49.956875   13020 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:46:49.957856   13020 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 11:46:49.957856   13020 kubeadm.go:322] 
	I0226 11:46:49.957856   13020 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 11:46:49.957856   13020 kubeadm.go:322] 	timed out waiting for the condition
	I0226 11:46:49.957856   13020 kubeadm.go:322] 
	I0226 11:46:49.957856   13020 kubeadm.go:322] This error is likely caused by:
	I0226 11:46:49.957856   13020 kubeadm.go:322] 	- The kubelet is not running
	I0226 11:46:49.957856   13020 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 11:46:49.957856   13020 kubeadm.go:322] 
	I0226 11:46:49.958865   13020 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 11:46:49.958865   13020 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 11:46:49.958865   13020 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 11:46:49.958865   13020 kubeadm.go:322] 
	I0226 11:46:49.958865   13020 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 11:46:49.959853   13020 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 11:46:49.959853   13020 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 11:46:49.959853   13020 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 11:46:49.959853   13020 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 11:46:49.959853   13020 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 11:46:49.959853   13020 kubeadm.go:406] StartCluster complete in 8m9.9810643s
	I0226 11:46:49.973872   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:46:50.032850   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.032850   13020 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:46:50.045860   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:46:50.109870   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.109870   13020 logs.go:278] No container was found matching "etcd"
	I0226 11:46:50.121862   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:46:50.178904   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.178904   13020 logs.go:278] No container was found matching "coredns"
	I0226 11:46:50.194854   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:46:50.245871   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.245871   13020 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:46:50.258863   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:46:50.324856   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.324856   13020 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:46:50.339856   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:46:50.388884   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.388884   13020 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:46:50.406875   13020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:46:50.458860   13020 logs.go:276] 0 containers: []
	W0226 11:46:50.458860   13020 logs.go:278] No container was found matching "kindnet"
	I0226 11:46:50.458860   13020 logs.go:123] Gathering logs for kubelet ...
	I0226 11:46:50.458860   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:46:50.513169   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:27 old-k8s-version-321200 kubelet[5770]: E0226 11:46:27.311901    5770 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:46:50.519181   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:28 old-k8s-version-321200 kubelet[5770]: E0226 11:46:28.306593    5770 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:46:50.546168   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:35 old-k8s-version-321200 kubelet[5770]: E0226 11:46:35.300820    5770 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:46:50.556174   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:38 old-k8s-version-321200 kubelet[5770]: E0226 11:46:38.315402    5770 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:46:50.564156   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:40 old-k8s-version-321200 kubelet[5770]: E0226 11:46:40.314492    5770 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:46:50.569186   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:41 old-k8s-version-321200 kubelet[5770]: E0226 11:46:41.306980    5770 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:46:50.600156   13020 logs.go:138] Found kubelet problem: Feb 26 11:46:50 old-k8s-version-321200 kubelet[5770]: E0226 11:46:50.326095    5770 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:46:50.600156   13020 logs.go:123] Gathering logs for dmesg ...
	I0226 11:46:50.600156   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:46:50.635152   13020 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:46:50.635152   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:46:50.795174   13020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:46:50.795174   13020 logs.go:123] Gathering logs for Docker ...
	I0226 11:46:50.796171   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:46:50.840153   13020 logs.go:123] Gathering logs for container status ...
	I0226 11:46:50.841159   13020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 11:46:50.937163   13020 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 11:46:50.937163   13020 out.go:239] * 
	W0226 11:46:50.938162   13020 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 11:46:50.938162   13020 out.go:239] * 
	W0226 11:46:50.940156   13020 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 11:46:50.949185   13020 out.go:177] X Problems detected in kubelet:
	I0226 11:46:50.957162   13020 out.go:177]   Feb 26 11:46:27 old-k8s-version-321200 kubelet[5770]: E0226 11:46:27.311901    5770 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 11:46:50.967174   13020 out.go:177]   Feb 26 11:46:28 old-k8s-version-321200 kubelet[5770]: E0226 11:46:28.306593    5770 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:46:50.974167   13020 out.go:177]   Feb 26 11:46:35 old-k8s-version-321200 kubelet[5770]: E0226 11:46:35.300820    5770 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:46:50.981157   13020 out.go:177] 
	W0226 11:46:50.983174   13020 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 11:46:50.984168   13020 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 11:46:50.984168   13020 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 11:46:50.989193   13020 out.go:177] 

** /stderr **
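The `[WARNING IsDockerSystemdCheck]` lines and the `K8S_KUBELET_NOT_RUNNING` exit in the log above point at a cgroup-driver mismatch: Docker inside the node is on `cgroupfs` while kubelet expects `systemd` (minikube's own suggestion, `--extra-config=kubelet.cgroup-driver=systemd`, is the kubelet-side half of the same fix). On a systemd host, the Docker-side half is commonly a one-line `daemon.json`; a minimal sketch, assuming a standard Linux Docker install at `/etc/docker/daemon.json`:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After writing this file, restarting the Docker daemon (e.g. `sudo systemctl restart docker`) makes the driver take effect; `docker info` should then report `Cgroup Driver: systemd`.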
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-321200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-321200
helpers_test.go:235: (dbg) docker inspect old-k8s-version-321200:

-- stdout --
	[
	    {
	        "Id": "9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242",
	        "Created": "2024-02-26T11:37:54.399460536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:37:55.093738611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hosts",
	        "LogPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242-json.log",
	        "Name": "/old-k8s-version-321200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-321200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-321200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-321200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-321200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-321200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e04b780161b04981012972b68d9d29013b961dd206bbd400cf1ac47fbc23ac5",
	            "SandboxKey": "/var/run/docker/netns/9e04b780161b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53922"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53923"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53920"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-321200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e96dc767099",
	                        "old-k8s-version-321200"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3d8e32e292076657fa3147b08ea4473653a270d339de0a1d187a6074718ce682",
	                    "EndpointID": "b8d4effe5d6ef0232e714e967b8757ec4a8fd16869940ba367ed00ad3d8a7ad3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-321200",
	                        "9e96dc767099"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 6 (1.4500561s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0226 11:46:52.079577    3044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 11:46:53.281656    3044 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-321200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-321200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (565.91s)
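The `status.go:415` error above (`"old-k8s-version-321200" does not appear in ...\kubeconfig`) is a context-lookup failure: the failed start never registered the profile's context in the kubeconfig, so `minikube status` exits 6 and kubectl still points at the stale `minikube-vm` context. The lookup amounts to a membership test over the kubeconfig's `contexts` list; a minimal sketch in Python (the sample kubeconfig dict below is made up for illustration — a real kubeconfig is YAML with the same `contexts`/`name` structure):

```python
# Minimal sketch of the kubeconfig context lookup behind the status.go:415
# error. The kubeconfig sample is hypothetical.
kubeconfig = {
    "current-context": "minikube-vm",  # the stale context the warning mentions
    "contexts": [
        {"name": "minikube-vm", "context": {"cluster": "minikube-vm"}},
    ],
}

def has_context(cfg: dict, name: str) -> bool:
    """True if `name` appears among the kubeconfig's named contexts."""
    return any(c.get("name") == name for c in cfg.get("contexts", []))

# The profile created by the failed start is absent, hence the exit status 6:
print(has_context(kubeconfig, "old-k8s-version-321200"))  # prints: False
print(has_context(kubeconfig, "minikube-vm"))             # prints: True
```

Running `minikube update-context`, as the status output suggests, repoints the kubeconfig at a live profile so this lookup succeeds.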

TestStartStop/group/old-k8s-version/serial/DeployApp (3.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-321200 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-321200 create -f testdata\busybox.yaml: exit status 1 (117.1287ms)

** stderr ** 
	error: context "old-k8s-version-321200" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-321200 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-321200
helpers_test.go:235: (dbg) docker inspect old-k8s-version-321200:

-- stdout --
	[
	    {
	        "Id": "9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242",
	        "Created": "2024-02-26T11:37:54.399460536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:37:55.093738611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hosts",
	        "LogPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242-json.log",
	        "Name": "/old-k8s-version-321200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-321200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-321200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-321200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-321200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-321200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e04b780161b04981012972b68d9d29013b961dd206bbd400cf1ac47fbc23ac5",
	            "SandboxKey": "/var/run/docker/netns/9e04b780161b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53922"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53923"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53920"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-321200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e96dc767099",
	                        "old-k8s-version-321200"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3d8e32e292076657fa3147b08ea4473653a270d339de0a1d187a6074718ce682",
	                    "EndpointID": "b8d4effe5d6ef0232e714e967b8757ec4a8fd16869940ba367ed00ad3d8a7ad3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-321200",
	                        "9e96dc767099"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 6 (1.2582444s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0226 11:46:53.790051    1204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 11:46:54.859070    1204 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-321200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-321200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-321200
helpers_test.go:235: (dbg) docker inspect old-k8s-version-321200:

-- stdout --
	[
	    {
	        "Id": "9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242",
	        "Created": "2024-02-26T11:37:54.399460536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:37:55.093738611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hosts",
	        "LogPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242-json.log",
	        "Name": "/old-k8s-version-321200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-321200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-321200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-321200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-321200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-321200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e04b780161b04981012972b68d9d29013b961dd206bbd400cf1ac47fbc23ac5",
	            "SandboxKey": "/var/run/docker/netns/9e04b780161b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53922"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53923"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53920"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-321200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e96dc767099",
	                        "old-k8s-version-321200"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3d8e32e292076657fa3147b08ea4473653a270d339de0a1d187a6074718ce682",
	                    "EndpointID": "b8d4effe5d6ef0232e714e967b8757ec4a8fd16869940ba367ed00ad3d8a7ad3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-321200",
	                        "9e96dc767099"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 6 (1.2852961s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0226 11:46:55.254814    7616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 11:46:56.340195    7616 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-321200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-321200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (3.05s)
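Note: the opaque directory name in the recurring "Unable to resolve the current Docker CLI context" warning is the SHA-256 digest of the context name "default" — the Docker CLI stores context metadata under a per-name hash directory in `.docker\contexts\meta`. A minimal Python check (illustrative annotation, not part of the test suite):

```python
import hashlib

# The Docker CLI derives the context metadata directory name
# from the SHA-256 of the context name ("default" here).
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# matches the directory name in the warning path:
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

So the warning simply reports that the metadata file for the "default" context was never created on this Jenkins host; it is noise rather than the cause of the kubeconfig failures above.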

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-321200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-321200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m38.1160281s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	W0226 11:46:56.510061    1964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_addons_439019a81e20ad064ebb72ced3e20f3355766968_4.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-321200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-321200 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-321200 describe deploy/metrics-server -n kube-system: exit status 1 (170.9913ms)

** stderr ** 
	error: context "old-k8s-version-321200" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-321200 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-321200
helpers_test.go:235: (dbg) docker inspect old-k8s-version-321200:

-- stdout --
	[
	    {
	        "Id": "9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242",
	        "Created": "2024-02-26T11:37:54.399460536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:37:55.093738611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hosts",
	        "LogPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242-json.log",
	        "Name": "/old-k8s-version-321200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-321200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-321200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-321200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-321200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-321200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e04b780161b04981012972b68d9d29013b961dd206bbd400cf1ac47fbc23ac5",
	            "SandboxKey": "/var/run/docker/netns/9e04b780161b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53922"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53923"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53920"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-321200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e96dc767099",
	                        "old-k8s-version-321200"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3d8e32e292076657fa3147b08ea4473653a270d339de0a1d187a6074718ce682",
	                    "EndpointID": "b8d4effe5d6ef0232e714e967b8757ec4a8fd16869940ba367ed00ad3d8a7ad3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-321200",
	                        "9e96dc767099"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 6 (1.8425352s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0226 11:48:35.132687    7360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 11:48:36.716408    7360 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-321200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-321200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.39s)

TestStartStop/group/old-k8s-version/serial/SecondStart (807.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-321200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-321200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 109 (13m21.3770304s)

-- stdout --
	* [old-k8s-version-321200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-321200 in cluster old-k8s-version-321200
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Restarting existing docker container for "old-k8s-version-321200" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	
	

-- /stdout --
** stderr ** 
	W0226 11:48:42.088514   10808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 11:48:42.182518   10808 out.go:291] Setting OutFile to fd 584 ...
	I0226 11:48:42.183531   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:48:42.183531   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:48:42.183531   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:48:42.212514   10808 out.go:298] Setting JSON to false
	I0226 11:48:42.216511   10808 start.go:129] hostinfo: {"hostname":"minikube7","uptime":5398,"bootTime":1708942723,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:48:42.216511   10808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:48:42.220543   10808 out.go:177] * [old-k8s-version-321200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:48:42.223519   10808 notify.go:220] Checking for updates...
	I0226 11:48:42.225511   10808 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:48:42.227556   10808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:48:42.230520   10808 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:48:42.233526   10808 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:48:42.236530   10808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:48:42.240523   10808 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:48:42.243513   10808 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0226 11:48:42.246534   10808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:48:42.584522   10808 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:48:42.598520   10808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:48:43.030704   10808 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:48:42.985176438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:48:43.033706   10808 out.go:177] * Using the docker driver based on existing profile
	I0226 11:48:43.036701   10808 start.go:299] selected driver: docker
	I0226 11:48:43.036701   10808 start.go:903] validating driver "docker" against &{Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:48:43.036701   10808 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:48:43.116849   10808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:48:43.555637   10808 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:48:43.509953999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:48:43.556679   10808 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:48:43.556679   10808 cni.go:84] Creating CNI manager for ""
	I0226 11:48:43.556679   10808 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:48:43.556679   10808 start_flags.go:323] config:
	{Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:48:43.560634   10808 out.go:177] * Starting control plane node old-k8s-version-321200 in cluster old-k8s-version-321200
	I0226 11:48:43.563645   10808 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:48:43.565639   10808 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:48:43.568636   10808 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:48:43.568636   10808 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:48:43.568636   10808 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 11:48:43.568636   10808 cache.go:56] Caching tarball of preloaded images
	I0226 11:48:43.569656   10808 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:48:43.569656   10808 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0226 11:48:43.569656   10808 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\config.json ...
	I0226 11:48:43.791890   10808 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:48:43.791890   10808 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:48:43.791890   10808 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:48:43.791890   10808 start.go:365] acquiring machines lock for old-k8s-version-321200: {Name:mk55b0c0259e24b9978efabbdf90c76106474ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:48:43.791890   10808 start.go:369] acquired machines lock for "old-k8s-version-321200" in 0s
	I0226 11:48:43.791890   10808 start.go:96] Skipping create...Using existing machine configuration
	I0226 11:48:43.791890   10808 fix.go:54] fixHost starting: 
	I0226 11:48:43.823895   10808 cli_runner.go:164] Run: docker container inspect old-k8s-version-321200 --format={{.State.Status}}
	I0226 11:48:44.030860   10808 fix.go:102] recreateIfNeeded on old-k8s-version-321200: state=Stopped err=<nil>
	W0226 11:48:44.030860   10808 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 11:48:44.035871   10808 out.go:177] * Restarting existing docker container for "old-k8s-version-321200" ...
	I0226 11:48:44.050868   10808 cli_runner.go:164] Run: docker start old-k8s-version-321200
	I0226 11:48:45.018120   10808 cli_runner.go:164] Run: docker container inspect old-k8s-version-321200 --format={{.State.Status}}
	I0226 11:48:45.271088   10808 kic.go:430] container "old-k8s-version-321200" state is running.
	I0226 11:48:45.292106   10808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-321200
	I0226 11:48:45.523107   10808 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\config.json ...
	I0226 11:48:45.526097   10808 machine.go:88] provisioning docker machine ...
	I0226 11:48:45.526097   10808 ubuntu.go:169] provisioning hostname "old-k8s-version-321200"
	I0226 11:48:45.537099   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:45.770103   10808 main.go:141] libmachine: Using SSH client type: native
	I0226 11:48:45.771116   10808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 54515 <nil> <nil>}
	I0226 11:48:45.771116   10808 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-321200 && echo "old-k8s-version-321200" | sudo tee /etc/hostname
	I0226 11:48:45.775113   10808 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0226 11:48:49.003608   10808 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-321200
	
	I0226 11:48:49.016590   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:49.220934   10808 main.go:141] libmachine: Using SSH client type: native
	I0226 11:48:49.220934   10808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 54515 <nil> <nil>}
	I0226 11:48:49.221941   10808 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-321200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-321200/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-321200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:48:49.414558   10808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:48:49.414558   10808 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 11:48:49.414558   10808 ubuntu.go:177] setting up certificates
	I0226 11:48:49.414558   10808 provision.go:83] configureAuth start
	I0226 11:48:49.427450   10808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-321200
	I0226 11:48:49.608391   10808 provision.go:138] copyHostCerts
	I0226 11:48:49.608391   10808 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 11:48:49.608391   10808 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 11:48:49.609361   10808 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 11:48:49.611290   10808 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 11:48:49.611365   10808 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 11:48:49.611802   10808 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 11:48:49.613631   10808 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 11:48:49.613631   10808 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 11:48:49.614165   10808 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 11:48:49.615715   10808 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-321200 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-321200]
	I0226 11:48:49.946356   10808 provision.go:172] copyRemoteCerts
	I0226 11:48:49.958032   10808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:48:49.970161   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:50.170567   10808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54515 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:48:50.316157   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 11:48:50.369876   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0226 11:48:50.413877   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 11:48:50.454885   10808 provision.go:86] duration metric: configureAuth took 1.04032s
	I0226 11:48:50.454885   10808 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:48:50.454885   10808 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:48:50.464877   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:50.667081   10808 main.go:141] libmachine: Using SSH client type: native
	I0226 11:48:50.667081   10808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 54515 <nil> <nil>}
	I0226 11:48:50.667081   10808 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 11:48:50.891644   10808 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 11:48:50.891705   10808 ubuntu.go:71] root file system type: overlay
	I0226 11:48:50.892109   10808 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 11:48:50.909193   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:51.119614   10808 main.go:141] libmachine: Using SSH client type: native
	I0226 11:48:51.119614   10808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 54515 <nil> <nil>}
	I0226 11:48:51.119614   10808 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 11:48:51.353101   10808 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 11:48:51.372370   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:51.570511   10808 main.go:141] libmachine: Using SSH client type: native
	I0226 11:48:51.571500   10808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 54515 <nil> <nil>}
	I0226 11:48:51.571500   10808 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 11:48:51.765834   10808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:48:51.765834   10808 machine.go:91] provisioned docker machine in 6.2396944s
	I0226 11:48:51.765834   10808 start.go:300] post-start starting for "old-k8s-version-321200" (driver="docker")
	I0226 11:48:51.765834   10808 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:48:51.784383   10808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:48:51.792375   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:51.990001   10808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54515 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:48:52.161666   10808 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:48:52.175859   10808 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:48:52.175859   10808 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:48:52.175859   10808 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:48:52.175859   10808 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:48:52.175859   10808 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 11:48:52.176982   10808 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 11:48:52.178561   10808 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 11:48:52.200940   10808 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 11:48:52.216890   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 11:48:52.257881   10808 start.go:303] post-start completed in 492.0437ms
	I0226 11:48:52.270893   10808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:48:52.284956   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:52.481527   10808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54515 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:48:52.633140   10808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:48:52.650120   10808 fix.go:56] fixHost completed within 8.8581707s
	I0226 11:48:52.650120   10808 start.go:83] releasing machines lock for "old-k8s-version-321200", held for 8.8581707s
	I0226 11:48:52.661114   10808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-321200
	I0226 11:48:52.837722   10808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:48:52.848709   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:52.848709   10808 ssh_runner.go:195] Run: cat /version.json
	I0226 11:48:52.857723   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:53.057439   10808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54515 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:48:53.072435   10808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54515 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-321200\id_rsa Username:docker}
	I0226 11:48:53.213426   10808 ssh_runner.go:195] Run: systemctl --version
	I0226 11:48:53.445176   10808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0226 11:48:53.456182   10808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0226 11:48:53.474189   10808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 11:48:53.516187   10808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 11:48:53.534185   10808 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0226 11:48:53.534185   10808 start.go:475] detecting cgroup driver to use...
	I0226 11:48:53.535213   10808 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:48:53.535213   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:48:53.583195   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0226 11:48:53.627177   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 11:48:53.649183   10808 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 11:48:53.663182   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 11:48:53.711201   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:48:53.743184   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 11:48:53.781201   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:48:53.823184   10808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:48:53.853185   10808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 11:48:53.891218   10808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:48:53.928210   10808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:48:53.961186   10808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:48:54.198165   10808 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 11:48:54.483797   10808 start.go:475] detecting cgroup driver to use...
	I0226 11:48:54.484153   10808 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:48:54.507139   10808 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 11:48:54.533147   10808 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 11:48:54.550135   10808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 11:48:54.574572   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:48:54.624570   10808 ssh_runner.go:195] Run: which cri-dockerd
	I0226 11:48:54.646579   10808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 11:48:54.670939   10808 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 11:48:54.722097   10808 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 11:48:54.936651   10808 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 11:48:55.128358   10808 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 11:48:55.128358   10808 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 11:48:55.198353   10808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:48:55.452162   10808 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:48:56.919530   10808 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4673577s)
	I0226 11:48:56.933505   10808 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:48:57.051506   10808 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:48:57.140568   10808 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0226 11:48:57.157510   10808 cli_runner.go:164] Run: docker exec -t old-k8s-version-321200 dig +short host.docker.internal
	I0226 11:48:57.579524   10808 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 11:48:57.603516   10808 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 11:48:57.617507   10808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:48:57.678513   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:48:57.961553   10808 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 11:48:57.979514   10808 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:48:58.031505   10808 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 11:48:58.031505   10808 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:48:58.052505   10808 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:48:58.115528   10808 ssh_runner.go:195] Run: which lz4
	I0226 11:48:58.152512   10808 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 11:48:58.164502   10808 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 11:48:58.164502   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0226 11:49:13.959181   10808 docker.go:649] Took 15.826555 seconds to copy over tarball
	I0226 11:49:13.972530   10808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 11:49:18.882565   10808 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.9099043s)
	I0226 11:49:18.882626   10808 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 11:49:19.005355   10808 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:49:19.031851   10808 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 11:49:19.098086   10808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:49:19.253983   10808 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:49:26.240784   10808 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.9867532s)
	I0226 11:49:26.259769   10808 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:49:26.316774   10808 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 11:49:26.316774   10808 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:49:26.316774   10808 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 11:49:26.334769   10808 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 11:49:26.334769   10808 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:49:26.341777   10808 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:26.345785   10808 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 11:49:26.346777   10808 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:26.350774   10808 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:26.352820   10808 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:26.355793   10808 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:26.356777   10808 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:49:26.356777   10808 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 11:49:26.370771   10808 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:26.375786   10808 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:26.375786   10808 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:26.375786   10808 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 11:49:26.384789   10808 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:26.391786   10808 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	W0226 11:49:26.477938   10808 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:49:26.586859   10808 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:49:26.699860   10808 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:26.780877   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0226 11:49:26.823540   10808 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:49:26.931718   10808 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.016651   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	W0226 11:49:27.040340   10808 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.040340   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 11:49:27.071330   10808 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 11:49:27.071330   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:49:27.071330   10808 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:27.082335   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:27.087335   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:27.088346   10808 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 11:49:27.088346   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:49:27.088346   10808 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 11:49:27.102349   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 11:49:27.137366   10808 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 11:49:27.137366   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:49:27.137366   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:49:27.137366   10808 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:27.152331   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	W0226 11:49:27.168361   10808 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.171352   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:49:27.183437   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:27.207331   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:49:27.227343   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0226 11:49:27.229344   10808 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 11:49:27.229344   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:49:27.229344   10808 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:27.238351   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	W0226 11:49:27.277383   10808 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.279358   10808 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 11:49:27.279358   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:49:27.279358   10808 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 11:49:27.287365   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:49:27.297358   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0226 11:49:27.337663   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:49:27.387729   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:27.439695   10808 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 11:49:27.439695   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:49:27.439695   10808 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:27.451656   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:27.497822   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:27.509824   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:49:27.547095   10808 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 11:49:27.547191   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:49:27.547191   10808 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:27.559142   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:27.605204   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:49:27.605744   10808 cache_images.go:92] LoadImages completed in 1.2889611s
	W0226 11:49:27.605904   10808 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0: The system cannot find the file specified.
	I0226 11:49:27.617726   10808 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:49:27.740360   10808 cni.go:84] Creating CNI manager for ""
	I0226 11:49:27.740545   10808 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:49:27.740545   10808 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:49:27.740545   10808 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-321200 NodeName:old-k8s-version-321200 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 11:49:27.741026   10808 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-321200"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-321200
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:49:27.741182   10808 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-321200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:49:27.756833   10808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 11:49:27.788093   10808 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:49:27.810138   10808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:49:27.827591   10808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0226 11:49:27.878736   10808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:49:27.923417   10808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0226 11:49:27.983829   10808 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:49:28.001749   10808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:49:28.027125   10808 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200 for IP: 192.168.85.2
	I0226 11:49:28.027125   10808 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:28.028286   10808 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:49:28.028583   10808 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 11:49:28.029411   10808 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.key
	I0226 11:49:28.029568   10808 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key.43b9df8c
	I0226 11:49:28.030200   10808 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key
	I0226 11:49:28.032221   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:49:28.032561   10808 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:49:28.032716   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:49:28.033076   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:49:28.033403   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:49:28.033778   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:49:28.034119   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:49:28.035509   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:49:28.096154   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 11:49:28.138515   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:49:28.193749   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 11:49:28.235981   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:49:28.281733   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:49:28.334907   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:49:28.383693   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:49:28.434967   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:49:28.481334   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:49:28.531716   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:49:28.579405   10808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:49:28.629660   10808 ssh_runner.go:195] Run: openssl version
	I0226 11:49:28.657958   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:49:28.700636   10808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:49:28.712851   10808 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:49:28.730793   10808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:49:28.761261   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:49:28.803910   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:49:28.835047   10808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:49:28.849201   10808 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:49:28.868679   10808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:49:28.912878   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:49:28.945100   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:49:28.986231   10808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:49:28.999235   10808 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:49:29.011225   10808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:49:29.042612   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:49:29.085409   10808 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:49:29.111788   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 11:49:29.139514   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 11:49:29.186580   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 11:49:29.214568   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 11:49:29.247784   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 11:49:29.283273   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0226 11:49:29.302458   10808 kubeadm.go:404] StartCluster: {Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:49:29.314447   10808 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:49:29.369711   10808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:49:29.394689   10808 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 11:49:29.394689   10808 kubeadm.go:636] restartCluster start
	I0226 11:49:29.406665   10808 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 11:49:29.436804   10808 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:29.447458   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:49:29.613505   10808 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-321200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:49:29.614511   10808 kubeconfig.go:146] "old-k8s-version-321200" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0226 11:49:29.615519   10808 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:29.643510   10808 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 11:49:29.662359   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:29.684123   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:29.707405   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:30.175052   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:30.189559   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:30.211752   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:30.676011   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:30.697044   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:30.723029   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:31.177297   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:31.195270   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:31.214271   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:31.663170   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:31.682175   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:31.717207   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:32.168316   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:32.186519   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:32.301036   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:32.671711   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:32.688697   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:32.715688   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:33.171906   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:33.187527   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:33.206870   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:33.674665   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:33.688666   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:33.735638   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:34.175289   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:34.191292   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:34.221286   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:34.675375   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:34.691369   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:34.713380   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:35.177410   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:35.196293   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:35.286202   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:35.662672   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:35.687740   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:35.711860   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:36.177974   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:36.192323   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:36.214580   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:36.666035   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:36.685121   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:36.725449   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:37.176637   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:37.188326   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:37.208335   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:37.675598   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:37.687250   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:37.709549   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:38.167248   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:38.180574   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:38.203018   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:38.675411   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:38.688105   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:38.714534   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:39.169327   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:39.181243   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:39.203704   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:39.673989   10808 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0226 11:49:39.674132   10808 kubeadm.go:1135] stopping kube-system containers ...
	I0226 11:49:39.684783   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:49:39.732830   10808 docker.go:483] Stopping containers: [1b59721a8112 3c54fc400caa bcb170ac4199 ae325a920cc8]
	I0226 11:49:39.743335   10808 ssh_runner.go:195] Run: docker stop 1b59721a8112 3c54fc400caa bcb170ac4199 ae325a920cc8
	I0226 11:49:39.797711   10808 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0226 11:49:39.837830   10808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:49:39.859423   10808 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 26 11:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 26 11:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 26 11:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 26 11:42 /etc/kubernetes/scheduler.conf
	
	I0226 11:49:39.871787   10808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0226 11:49:39.908322   10808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0226 11:49:39.944115   10808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0226 11:49:39.977883   10808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0226 11:49:40.015032   10808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:49:40.039808   10808 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0226 11:49:40.039808   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 11:49:40.181188   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 11:49:41.165915   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0226 11:49:41.530034   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 11:49:41.672288   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0226 11:49:41.874727   10808 api_server.go:52] waiting for apiserver process to appear ...
	I0226 11:49:41.891350   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:42.406577   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:42.914373   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:43.397264   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:43.894759   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:44.400022   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:44.896619   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:45.397604   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:45.900947   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:46.398854   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:46.897467   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:47.398497   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:47.895330   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:48.406126   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:48.900247   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:49.397322   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:49.889671   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:50:41.400789   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:50:41.893293   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:50:41.948970   10808 logs.go:276] 0 containers: []
	W0226 11:50:41.948970   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:50:41.962970   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:50:42.007972   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.007972   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:50:42.021974   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:50:42.071115   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.071159   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:50:42.086008   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:50:42.129676   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.129676   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:50:42.140680   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:50:42.205482   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.205482   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:50:42.215480   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:50:42.257249   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.257249   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:50:42.269348   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:50:42.310136   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.310136   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:50:42.322138   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:50:42.383145   10808 logs.go:276] 0 containers: []
	W0226 11:50:42.383145   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:50:42.383145   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:50:42.383145   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:50:42.415538   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:50:42.415538   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:50:42.556412   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:50:42.556412   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:50:42.556412   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:50:42.600484   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:50:42.600484   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:50:42.703496   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:50:42.703496   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:50:42.749476   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:24 old-k8s-version-321200 kubelet[1696]: E0226 11:50:24.131314    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:50:42.754484   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:26 old-k8s-version-321200 kubelet[1696]: E0226 11:50:26.110934    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:42.760501   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:29 old-k8s-version-321200 kubelet[1696]: E0226 11:50:29.107976    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:50:42.761486   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:29 old-k8s-version-321200 kubelet[1696]: E0226 11:50:29.120522    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:50:42.780494   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:37 old-k8s-version-321200 kubelet[1696]: E0226 11:50:37.124379    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:50:42.788486   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.164679    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:50:42.789484   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.168907    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:42.789484   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.176482    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:50:42.794486   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:50:42.794486   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:50:42.795480   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:50:42.795480   10808 out.go:239]   Feb 26 11:50:29 old-k8s-version-321200 kubelet[1696]: E0226 11:50:29.120522    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:50:29 old-k8s-version-321200 kubelet[1696]: E0226 11:50:29.120522    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:50:42.795480   10808 out.go:239]   Feb 26 11:50:37 old-k8s-version-321200 kubelet[1696]: E0226 11:50:37.124379    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:50:37 old-k8s-version-321200 kubelet[1696]: E0226 11:50:37.124379    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:50:42.795480   10808 out.go:239]   Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.164679    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.164679    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:50:42.795480   10808 out.go:239]   Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.168907    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.168907    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:42.795480   10808 out.go:239]   Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.176482    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.176482    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:50:42.795480   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:50:42.795480   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:50:52.817732   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:50:52.907960   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:50:52.959949   10808 logs.go:276] 0 containers: []
	W0226 11:50:52.959949   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:50:52.979997   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:50:53.041962   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.041962   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:50:53.055956   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:50:53.112375   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.112375   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:50:53.126586   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:50:53.175453   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.175453   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:50:53.194472   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:50:53.240035   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.240035   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:50:53.250034   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:50:53.298390   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.298390   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:50:53.313371   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:50:53.363348   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.364341   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:50:53.381356   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:50:53.433358   10808 logs.go:276] 0 containers: []
	W0226 11:50:53.433358   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:50:53.433358   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:50:53.433358   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:50:53.461355   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:50:53.461355   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:50:53.582713   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:50:53.582713   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:50:53.582713   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:50:53.635873   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:50:53.635873   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:50:53.737206   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:50:53.737617   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:50:53.812893   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:37 old-k8s-version-321200 kubelet[1696]: E0226 11:50:37.124379    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:50:53.821902   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.164679    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:50:53.822939   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.168907    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:53.823904   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.176482    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:50:53.852532   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.167814    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:53.853537   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:50:53.854530   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:50:53.857140   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:50:53.857140   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:50:53.857140   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:50:53.857140   10808 out.go:239]   Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.168907    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.168907    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:53.857140   10808 out.go:239]   Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.176482    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:50:40 old-k8s-version-321200 kubelet[1696]: E0226 11:50:40.176482    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:50:53.857976   10808 out.go:239]   Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.167814    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.167814    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:50:53.858051   10808 out.go:239]   Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:50:53.858118   10808 out.go:239]   Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:50:53.858162   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:50:53.858162   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:51:03.891141   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:51:03.925953   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:51:03.970016   10808 logs.go:276] 0 containers: []
	W0226 11:51:03.970016   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:51:03.979802   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:51:04.021140   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.021261   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:51:04.037420   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:51:04.081546   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.081546   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:51:04.093206   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:51:04.134891   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.134891   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:51:04.145882   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:51:04.189305   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.189394   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:51:04.199454   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:51:04.241333   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.241411   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:51:04.251151   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:51:04.291071   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.291071   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:51:04.302073   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:51:04.341955   10808 logs.go:276] 0 containers: []
	W0226 11:51:04.341955   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:51:04.341955   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:51:04.342090   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:51:04.415284   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.167814    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:04.416302   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:04.417804   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:04.425215   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:54 old-k8s-version-321200 kubelet[1696]: E0226 11:50:54.124923    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:04.450248   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:04.453239   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:51:04.454237   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:51:04.454237   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:51:04.480873   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:51:04.480873   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:51:04.619448   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:51:04.619523   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:51:04.619523   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:51:04.656054   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:51:04.656054   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:51:04.750890   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:04.750890   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:51:04.750890   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:51:04.750890   10808 out.go:239]   Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:04.750890   10808 out.go:239]   Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:04.750890   10808 out.go:239]   Feb 26 11:50:54 old-k8s-version-321200 kubelet[1696]: E0226 11:50:54.124923    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:50:54 old-k8s-version-321200 kubelet[1696]: E0226 11:50:54.124923    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:04.750890   10808 out.go:239]   Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:04.750890   10808 out.go:239]   Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:51:04.750890   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:04.750890   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:51:14.782934   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:51:14.817188   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:51:14.857135   10808 logs.go:276] 0 containers: []
	W0226 11:51:14.857135   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:51:14.867278   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:51:14.910162   10808 logs.go:276] 0 containers: []
	W0226 11:51:14.910201   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:51:14.921251   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:51:14.977013   10808 logs.go:276] 0 containers: []
	W0226 11:51:14.977079   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:51:14.987331   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:51:15.039322   10808 logs.go:276] 0 containers: []
	W0226 11:51:15.039322   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:51:15.050777   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:51:15.094336   10808 logs.go:276] 0 containers: []
	W0226 11:51:15.095334   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:51:15.103327   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:51:15.147180   10808 logs.go:276] 0 containers: []
	W0226 11:51:15.147237   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:51:15.160031   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:51:15.203667   10808 logs.go:276] 0 containers: []
	W0226 11:51:15.203714   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:51:15.211752   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:51:15.254522   10808 logs.go:276] 0 containers: []
	W0226 11:51:15.254522   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:51:15.254522   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:51:15.254522   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:51:15.332731   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:51:15.332823   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:51:15.374154   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.167814    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:15.375596   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.184121    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:15.376189   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:52 old-k8s-version-321200 kubelet[1696]: E0226 11:50:52.185357    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:15.381445   10808 logs.go:138] Found kubelet problem: Feb 26 11:50:54 old-k8s-version-321200 kubelet[1696]: E0226 11:50:54.124923    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:15.401151   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:15.404172   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:15.407240   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:05 old-k8s-version-321200 kubelet[1696]: E0226 11:51:05.109272    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:15.415638   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:09 old-k8s-version-321200 kubelet[1696]: E0226 11:51:09.102548    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:51:15.428538   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:51:15.428538   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:51:15.457997   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:51:15.458063   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:51:15.599076   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:51:15.599076   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:51:15.599076   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:51:15.631059   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:15.631059   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:51:15.631059   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:51:15.631059   10808 out.go:239]   Feb 26 11:50:54 old-k8s-version-321200 kubelet[1696]: E0226 11:50:54.124923    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:50:54 old-k8s-version-321200 kubelet[1696]: E0226 11:50:54.124923    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:15.631059   10808 out.go:239]   Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:15.631059   10808 out.go:239]   Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:15.632059   10808 out.go:239]   Feb 26 11:51:05 old-k8s-version-321200 kubelet[1696]: E0226 11:51:05.109272    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:51:05 old-k8s-version-321200 kubelet[1696]: E0226 11:51:05.109272    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:15.632059   10808 out.go:239]   Feb 26 11:51:09 old-k8s-version-321200 kubelet[1696]: E0226 11:51:09.102548    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:51:09 old-k8s-version-321200 kubelet[1696]: E0226 11:51:09.102548    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:51:15.632059   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:15.632059   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:51:25.665883   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:51:25.703511   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:51:25.752114   10808 logs.go:276] 0 containers: []
	W0226 11:51:25.752114   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:51:25.769670   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:51:25.834379   10808 logs.go:276] 0 containers: []
	W0226 11:51:25.834379   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:51:25.848377   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:51:25.893391   10808 logs.go:276] 0 containers: []
	W0226 11:51:25.893391   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:51:25.903411   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:51:25.971273   10808 logs.go:276] 0 containers: []
	W0226 11:51:25.971273   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:51:25.985305   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:51:26.028287   10808 logs.go:276] 0 containers: []
	W0226 11:51:26.028287   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:51:26.036278   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:51:26.077400   10808 logs.go:276] 0 containers: []
	W0226 11:51:26.077465   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:51:26.087056   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:51:26.129598   10808 logs.go:276] 0 containers: []
	W0226 11:51:26.129598   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:51:26.138970   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:51:26.179883   10808 logs.go:276] 0 containers: []
	W0226 11:51:26.179883   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:51:26.179883   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:51:26.179883   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:51:26.269472   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:51:26.269472   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:51:26.310462   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:03 old-k8s-version-321200 kubelet[1696]: E0226 11:51:03.103792    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:26.314851   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:04 old-k8s-version-321200 kubelet[1696]: E0226 11:51:04.111892    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:26.319174   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:05 old-k8s-version-321200 kubelet[1696]: E0226 11:51:05.109272    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:26.327402   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:09 old-k8s-version-321200 kubelet[1696]: E0226 11:51:09.102548    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:26.351828   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:17 old-k8s-version-321200 kubelet[1696]: E0226 11:51:17.104247    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:26.354333   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:18 old-k8s-version-321200 kubelet[1696]: E0226 11:51:18.111948    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:26.358349   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:19 old-k8s-version-321200 kubelet[1696]: E0226 11:51:19.106624    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:26.369434   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:23 old-k8s-version-321200 kubelet[1696]: E0226 11:51:23.118018    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:51:26.375721   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:51:26.375721   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:51:26.407259   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:51:26.407382   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:51:26.534723   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:51:26.534783   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:51:26.534848   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:51:26.571233   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:26.571233   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:51:26.571233   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:51:26.571233   10808 out.go:239]   Feb 26 11:51:09 old-k8s-version-321200 kubelet[1696]: E0226 11:51:09.102548    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:26.571799   10808 out.go:239]   Feb 26 11:51:17 old-k8s-version-321200 kubelet[1696]: E0226 11:51:17.104247    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:26.571799   10808 out.go:239]   Feb 26 11:51:18 old-k8s-version-321200 kubelet[1696]: E0226 11:51:18.111948    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:26.571865   10808 out.go:239]   Feb 26 11:51:19 old-k8s-version-321200 kubelet[1696]: E0226 11:51:19.106624    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:26.571926   10808 out.go:239]   Feb 26 11:51:23 old-k8s-version-321200 kubelet[1696]: E0226 11:51:23.118018    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:51:26.571926   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:26.572057   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:51:36.594722   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:51:36.632726   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:51:36.672708   10808 logs.go:276] 0 containers: []
	W0226 11:51:36.672708   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:51:36.683701   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:51:36.723709   10808 logs.go:276] 0 containers: []
	W0226 11:51:36.723709   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:51:36.731708   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:51:36.770939   10808 logs.go:276] 0 containers: []
	W0226 11:51:36.770939   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:51:36.783138   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:51:36.850947   10808 logs.go:276] 0 containers: []
	W0226 11:51:36.850947   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:51:36.866143   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:51:36.912179   10808 logs.go:276] 0 containers: []
	W0226 11:51:36.912179   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:51:36.923166   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:51:36.965144   10808 logs.go:276] 0 containers: []
	W0226 11:51:36.965144   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:51:36.974148   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:51:37.018152   10808 logs.go:276] 0 containers: []
	W0226 11:51:37.018152   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:51:37.029155   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:51:37.071172   10808 logs.go:276] 0 containers: []
	W0226 11:51:37.071172   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:51:37.071172   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:51:37.071172   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:51:37.121155   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:17 old-k8s-version-321200 kubelet[1696]: E0226 11:51:17.104247    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:37.125161   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:18 old-k8s-version-321200 kubelet[1696]: E0226 11:51:18.111948    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:37.129177   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:19 old-k8s-version-321200 kubelet[1696]: E0226 11:51:19.106624    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:37.147171   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:23 old-k8s-version-321200 kubelet[1696]: E0226 11:51:23.118018    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:37.176147   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:29 old-k8s-version-321200 kubelet[1696]: E0226 11:51:29.131418    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:37.180153   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:30 old-k8s-version-321200 kubelet[1696]: E0226 11:51:30.113275    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:37.187161   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:32 old-k8s-version-321200 kubelet[1696]: E0226 11:51:32.129373    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:51:37.201159   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:51:37.201159   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:51:37.226164   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:51:37.226164   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:51:37.350148   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:51:37.350148   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:51:37.350148   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:51:37.395829   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:51:37.395829   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:51:37.473984   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:37.474119   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:51:37.474254   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:51:37.474322   10808 out.go:239]   Feb 26 11:51:19 old-k8s-version-321200 kubelet[1696]: E0226 11:51:19.106624    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:37.474375   10808 out.go:239]   Feb 26 11:51:23 old-k8s-version-321200 kubelet[1696]: E0226 11:51:23.118018    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:37.474375   10808 out.go:239]   Feb 26 11:51:29 old-k8s-version-321200 kubelet[1696]: E0226 11:51:29.131418    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:37.474375   10808 out.go:239]   Feb 26 11:51:30 old-k8s-version-321200 kubelet[1696]: E0226 11:51:30.113275    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:37.474375   10808 out.go:239]   Feb 26 11:51:32 old-k8s-version-321200 kubelet[1696]: E0226 11:51:32.129373    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:51:37.474375   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:37.474375   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:51:47.514697   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:51:47.553666   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:51:47.590898   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.590898   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:51:47.604451   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:51:47.643445   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.643445   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:51:47.651450   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:51:47.690462   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.690462   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:51:47.700458   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:51:47.756757   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.756757   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:51:47.766461   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:51:47.811308   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.811308   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:51:47.821307   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:51:47.862304   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.862304   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:51:47.871310   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:51:47.906295   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.906295   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:51:47.916032   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:51:47.968503   10808 logs.go:276] 0 containers: []
	W0226 11:51:47.968503   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:51:47.968503   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:51:47.968503   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:51:48.021282   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:29 old-k8s-version-321200 kubelet[1696]: E0226 11:51:29.131418    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:48.025298   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:30 old-k8s-version-321200 kubelet[1696]: E0226 11:51:30.113275    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:48.030282   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:32 old-k8s-version-321200 kubelet[1696]: E0226 11:51:32.129373    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:48.050280   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:38 old-k8s-version-321200 kubelet[1696]: E0226 11:51:38.107035    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:48.065278   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:43 old-k8s-version-321200 kubelet[1696]: E0226 11:51:43.125060    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:48.070317   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:45 old-k8s-version-321200 kubelet[1696]: E0226 11:51:45.109960    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:51:48.077264   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:51:48.077264   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:51:48.102267   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:51:48.102267   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:51:48.227637   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:51:48.227637   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:51:48.227637   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:51:48.272595   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:51:48.272595   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:51:48.351578   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:48.351578   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:51:48.351578   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:51:48.352718   10808 out.go:239]   Feb 26 11:51:30 old-k8s-version-321200 kubelet[1696]: E0226 11:51:30.113275    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:48.352718   10808 out.go:239]   Feb 26 11:51:32 old-k8s-version-321200 kubelet[1696]: E0226 11:51:32.129373    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:48.352718   10808 out.go:239]   Feb 26 11:51:38 old-k8s-version-321200 kubelet[1696]: E0226 11:51:38.107035    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:48.352718   10808 out.go:239]   Feb 26 11:51:43 old-k8s-version-321200 kubelet[1696]: E0226 11:51:43.125060    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:51:43 old-k8s-version-321200 kubelet[1696]: E0226 11:51:43.125060    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:48.352718   10808 out.go:239]   Feb 26 11:51:45 old-k8s-version-321200 kubelet[1696]: E0226 11:51:45.109960    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:51:45 old-k8s-version-321200 kubelet[1696]: E0226 11:51:45.109960    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:51:48.352718   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:48.352718   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:51:58.390666   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:51:58.428663   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:51:58.473927   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.473927   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:51:58.483912   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:51:58.524922   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.524922   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:51:58.537919   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:51:58.580919   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.580919   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:51:58.589929   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:51:58.632915   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.632915   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:51:58.640921   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:51:58.684917   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.684917   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:51:58.700961   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:51:58.747955   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.747955   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:51:58.757925   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:51:58.801913   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.801913   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:51:58.815917   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:51:58.868932   10808 logs.go:276] 0 containers: []
	W0226 11:51:58.868932   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:51:58.868932   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:51:58.868932   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:51:58.902938   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:51:58.902938   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:51:59.036929   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:51:59.036929   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:51:59.036929   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:51:59.082923   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:51:59.082923   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:51:59.183940   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:51:59.183940   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:51:59.233752   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:38 old-k8s-version-321200 kubelet[1696]: E0226 11:51:38.107035    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:59.243753   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:43 old-k8s-version-321200 kubelet[1696]: E0226 11:51:43.125060    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:59.249761   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:45 old-k8s-version-321200 kubelet[1696]: E0226 11:51:45.109960    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:59.258772   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:48 old-k8s-version-321200 kubelet[1696]: E0226 11:51:48.096747    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:59.269771   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:51 old-k8s-version-321200 kubelet[1696]: E0226 11:51:51.093844    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:59.285761   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:56 old-k8s-version-321200 kubelet[1696]: E0226 11:51:56.121400    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:59.288757   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:51:59.295759   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:59.295759   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:51:59.295759   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:51:59.295759   10808 out.go:239]   Feb 26 11:51:45 old-k8s-version-321200 kubelet[1696]: E0226 11:51:45.109960    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:51:45 old-k8s-version-321200 kubelet[1696]: E0226 11:51:45.109960    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:51:59.295759   10808 out.go:239]   Feb 26 11:51:48 old-k8s-version-321200 kubelet[1696]: E0226 11:51:48.096747    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:51:48 old-k8s-version-321200 kubelet[1696]: E0226 11:51:48.096747    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:51:59.295759   10808 out.go:239]   Feb 26 11:51:51 old-k8s-version-321200 kubelet[1696]: E0226 11:51:51.093844    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:51:51 old-k8s-version-321200 kubelet[1696]: E0226 11:51:51.093844    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:51:59.295759   10808 out.go:239]   Feb 26 11:51:56 old-k8s-version-321200 kubelet[1696]: E0226 11:51:56.121400    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:51:56 old-k8s-version-321200 kubelet[1696]: E0226 11:51:56.121400    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:51:59.295759   10808 out.go:239]   Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:51:59.295759   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:51:59.295759   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:52:09.319257   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:52:09.355559   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:52:09.410771   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.410771   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:52:09.420793   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:52:09.468223   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.468223   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:52:09.480223   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:52:09.531207   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.531207   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:52:09.541474   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:52:09.585206   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.585206   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:52:09.599220   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:52:09.639224   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.639224   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:52:09.649215   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:52:09.692221   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.692221   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:52:09.704222   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:52:09.746201   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.746201   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:52:09.756205   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:52:09.804233   10808 logs.go:276] 0 containers: []
	W0226 11:52:09.804233   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:52:09.804233   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:52:09.804233   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:52:09.897092   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:52:09.897092   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:52:09.937939   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:48 old-k8s-version-321200 kubelet[1696]: E0226 11:51:48.096747    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:09.944941   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:51 old-k8s-version-321200 kubelet[1696]: E0226 11:51:51.093844    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:09.958526   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:56 old-k8s-version-321200 kubelet[1696]: E0226 11:51:56.121400    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:09.963490   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:09.988885   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:03 old-k8s-version-321200 kubelet[1696]: E0226 11:52:03.126081    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:09.992892   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:04 old-k8s-version-321200 kubelet[1696]: E0226 11:52:04.112469    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:10.001397   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:07 old-k8s-version-321200 kubelet[1696]: E0226 11:52:07.099251    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 11:52:10.006839   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:52:10.006839   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:52:10.036173   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:52:10.037165   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:52:10.166302   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:52:10.166302   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:52:10.166302   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:52:10.208305   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:10.208305   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:52:10.208305   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:52:10.208305   10808 out.go:239]   Feb 26 11:51:56 old-k8s-version-321200 kubelet[1696]: E0226 11:51:56.121400    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:51:56 old-k8s-version-321200 kubelet[1696]: E0226 11:51:56.121400    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:10.208305   10808 out.go:239]   Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:10.208305   10808 out.go:239]   Feb 26 11:52:03 old-k8s-version-321200 kubelet[1696]: E0226 11:52:03.126081    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:52:03 old-k8s-version-321200 kubelet[1696]: E0226 11:52:03.126081    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:10.208305   10808 out.go:239]   Feb 26 11:52:04 old-k8s-version-321200 kubelet[1696]: E0226 11:52:04.112469    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:52:04 old-k8s-version-321200 kubelet[1696]: E0226 11:52:04.112469    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:10.208305   10808 out.go:239]   Feb 26 11:52:07 old-k8s-version-321200 kubelet[1696]: E0226 11:52:07.099251    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:52:07 old-k8s-version-321200 kubelet[1696]: E0226 11:52:07.099251    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 11:52:10.208305   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:10.208305   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:52:20.238533   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:52:20.285086   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:52:20.333824   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.333891   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:52:20.343533   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:52:20.395537   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.395537   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:52:20.404528   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:52:20.444558   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.444628   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:52:20.460685   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:52:20.514067   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.514067   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:52:20.524058   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:52:20.567196   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.568195   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:52:20.581207   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:52:20.621179   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.621179   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:52:20.629175   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:52:20.676702   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.676856   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:52:20.687393   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:52:20.735361   10808 logs.go:276] 0 containers: []
	W0226 11:52:20.735361   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:52:20.735361   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:52:20.735361   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:52:20.788215   10808 logs.go:138] Found kubelet problem: Feb 26 11:51:57 old-k8s-version-321200 kubelet[1696]: E0226 11:51:57.146583    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:20.808169   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:03 old-k8s-version-321200 kubelet[1696]: E0226 11:52:03.126081    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:20.812186   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:04 old-k8s-version-321200 kubelet[1696]: E0226 11:52:04.112469    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:20.819170   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:07 old-k8s-version-321200 kubelet[1696]: E0226 11:52:07.099251    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:20.827159   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:10 old-k8s-version-321200 kubelet[1696]: E0226 11:52:10.110771    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:20.845175   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:16 old-k8s-version-321200 kubelet[1696]: E0226 11:52:16.127932    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:20.849183   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:18 old-k8s-version-321200 kubelet[1696]: E0226 11:52:18.111786    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:52:20.856168   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:52:20.856168   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:52:20.892181   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:52:20.892181   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:52:21.013170   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:52:21.013170   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:52:21.013170   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:52:21.046173   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:52:21.046173   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:52:21.150171   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:21.150171   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:52:21.150171   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:52:21.150171   10808 out.go:239]   Feb 26 11:52:04 old-k8s-version-321200 kubelet[1696]: E0226 11:52:04.112469    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:21.150171   10808 out.go:239]   Feb 26 11:52:07 old-k8s-version-321200 kubelet[1696]: E0226 11:52:07.099251    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:21.150171   10808 out.go:239]   Feb 26 11:52:10 old-k8s-version-321200 kubelet[1696]: E0226 11:52:10.110771    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:21.150171   10808 out.go:239]   Feb 26 11:52:16 old-k8s-version-321200 kubelet[1696]: E0226 11:52:16.127932    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:21.150171   10808 out.go:239]   Feb 26 11:52:18 old-k8s-version-321200 kubelet[1696]: E0226 11:52:18.111786    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:52:21.150171   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:21.150171   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:52:31.188830   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:52:31.228839   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:52:31.274858   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.274858   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:52:31.289828   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:52:31.335830   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.335830   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:52:31.349834   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:52:31.405855   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.405855   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:52:31.422842   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:52:31.482669   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.482669   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:52:31.499691   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:52:31.552659   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.552659   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:52:31.569674   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:52:31.641657   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.641657   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:52:31.656732   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:52:31.709680   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.709680   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:52:31.724669   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:52:31.784863   10808 logs.go:276] 0 containers: []
	W0226 11:52:31.784863   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:52:31.784863   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:52:31.784863   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:52:31.826833   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:52:31.826833   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:52:31.930832   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:52:31.930832   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:52:31.994406   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:10 old-k8s-version-321200 kubelet[1696]: E0226 11:52:10.110771    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:32.025903   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:16 old-k8s-version-321200 kubelet[1696]: E0226 11:52:16.127932    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:32.036629   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:18 old-k8s-version-321200 kubelet[1696]: E0226 11:52:18.111786    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:32.053743   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:22 old-k8s-version-321200 kubelet[1696]: E0226 11:52:22.112851    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:32.070745   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:26 old-k8s-version-321200 kubelet[1696]: E0226 11:52:26.121565    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:32.082758   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:29 old-k8s-version-321200 kubelet[1696]: E0226 11:52:29.129763    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:32.086751   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:30 old-k8s-version-321200 kubelet[1696]: E0226 11:52:30.122574    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:52:32.092736   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:52:32.093741   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:52:32.131763   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:52:32.131763   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:52:32.307203   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:52:32.307203   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:32.307203   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:52:32.307203   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:52:32.307203   10808 out.go:239]   Feb 26 11:52:18 old-k8s-version-321200 kubelet[1696]: E0226 11:52:18.111786    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:32.307203   10808 out.go:239]   Feb 26 11:52:22 old-k8s-version-321200 kubelet[1696]: E0226 11:52:22.112851    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:32.307203   10808 out.go:239]   Feb 26 11:52:26 old-k8s-version-321200 kubelet[1696]: E0226 11:52:26.121565    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:32.307203   10808 out.go:239]   Feb 26 11:52:29 old-k8s-version-321200 kubelet[1696]: E0226 11:52:29.129763    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:32.308189   10808 out.go:239]   Feb 26 11:52:30 old-k8s-version-321200 kubelet[1696]: E0226 11:52:30.122574    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:52:32.308189   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:32.308189   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:52:42.348836   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:52:44.316927   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:52:44.359796   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.359796   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:52:44.370876   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:52:44.413941   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.414080   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:52:44.424688   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:52:44.476351   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.476351   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:52:44.488493   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:52:44.533393   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.533460   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:52:44.544353   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:52:44.583577   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.583647   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:52:44.596466   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:52:44.636015   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.636015   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:52:44.649535   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:52:44.690002   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.690109   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:52:44.702186   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:52:44.745942   10808 logs.go:276] 0 containers: []
	W0226 11:52:44.745942   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:52:44.745942   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:52:44.745942   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:52:44.793218   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:22 old-k8s-version-321200 kubelet[1696]: E0226 11:52:22.112851    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:44.802173   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:26 old-k8s-version-321200 kubelet[1696]: E0226 11:52:26.121565    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:44.809003   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:29 old-k8s-version-321200 kubelet[1696]: E0226 11:52:29.129763    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:44.812046   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:30 old-k8s-version-321200 kubelet[1696]: E0226 11:52:30.122574    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:44.832611   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:37 old-k8s-version-321200 kubelet[1696]: E0226 11:52:37.110804    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:44.832894   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:37 old-k8s-version-321200 kubelet[1696]: E0226 11:52:37.112284    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:44.845560   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.104460    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:44.846558   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.106022    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:52:44.848782   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:52:44.848782   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:52:44.874935   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:52:44.874935   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:52:44.979717   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:52:44.979797   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:52:44.979797   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:52:45.019827   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:52:45.019827   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:52:45.102667   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:45.102667   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:52:45.102667   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:52:45.102667   10808 out.go:239]   Feb 26 11:52:30 old-k8s-version-321200 kubelet[1696]: E0226 11:52:30.122574    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:45.102667   10808 out.go:239]   Feb 26 11:52:37 old-k8s-version-321200 kubelet[1696]: E0226 11:52:37.110804    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:45.102667   10808 out.go:239]   Feb 26 11:52:37 old-k8s-version-321200 kubelet[1696]: E0226 11:52:37.112284    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:45.103704   10808 out.go:239]   Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.104460    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:45.103704   10808 out.go:239]   Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.106022    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:52:45.103704   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:45.103704   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:52:55.141509   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:52:55.188217   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:52:55.237210   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.237210   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:52:55.248211   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:52:55.288217   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.288217   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:52:55.299206   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:52:55.340222   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.340222   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:52:55.353208   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:52:55.397412   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.397455   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:52:55.409181   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:52:55.452032   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.452032   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:52:55.460023   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:52:55.504029   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.504029   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:52:55.514035   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:52:55.563059   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.563059   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:52:55.575011   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:52:55.630019   10808 logs.go:276] 0 containers: []
	W0226 11:52:55.630019   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:52:55.630019   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:52:55.630019   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:52:55.745815   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:52:55.745815   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:52:55.823126   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:37 old-k8s-version-321200 kubelet[1696]: E0226 11:52:37.110804    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:55.824116   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:37 old-k8s-version-321200 kubelet[1696]: E0226 11:52:37.112284    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:55.843116   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.104460    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:55.844111   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.106022    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:55.855124   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:48 old-k8s-version-321200 kubelet[1696]: E0226 11:52:48.100115    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:55.863110   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:50 old-k8s-version-321200 kubelet[1696]: E0226 11:52:50.104128    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:55.876128   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:54 old-k8s-version-321200 kubelet[1696]: E0226 11:52:54.107263    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:52:55.882113   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:52:55.882113   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:52:55.920991   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:52:55.920991   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:52:56.074170   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:52:56.075151   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:52:56.075151   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:52:56.116534   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:56.116534   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:52:56.116534   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:52:56.116534   10808 out.go:239]   Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.104460    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:52:56.116534   10808 out.go:239]   Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.106022    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:52:56.116534   10808 out.go:239]   Feb 26 11:52:48 old-k8s-version-321200 kubelet[1696]: E0226 11:52:48.100115    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:52:56.116534   10808 out.go:239]   Feb 26 11:52:50 old-k8s-version-321200 kubelet[1696]: E0226 11:52:50.104128    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:52:56.116534   10808 out.go:239]   Feb 26 11:52:54 old-k8s-version-321200 kubelet[1696]: E0226 11:52:54.107263    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:52:56.116534   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:52:56.116534   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:53:06.152799   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:53:06.194786   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:53:06.236784   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.236784   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:53:06.245786   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:53:06.288801   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.288801   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:53:06.299793   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:53:06.344002   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.344002   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:53:06.357082   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:53:06.406052   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.406052   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:53:06.418061   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:53:06.459060   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.459060   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:53:06.472068   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:53:06.517067   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.517067   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:53:06.527047   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:53:06.568052   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.568052   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:53:06.578085   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:53:06.629503   10808 logs.go:276] 0 containers: []
	W0226 11:53:06.629503   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:53:06.629503   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:53:06.629503   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:53:06.651516   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:53:06.651516   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:53:06.779126   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:53:06.779126   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:53:06.779126   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:53:06.817754   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:53:06.817754   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:53:06.899749   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:53:06.899749   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:53:06.942753   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.104460    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:06.942753   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:43 old-k8s-version-321200 kubelet[1696]: E0226 11:52:43.106022    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:06.952737   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:48 old-k8s-version-321200 kubelet[1696]: E0226 11:52:48.100115    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:06.957737   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:50 old-k8s-version-321200 kubelet[1696]: E0226 11:52:50.104128    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:06.971740   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:54 old-k8s-version-321200 kubelet[1696]: E0226 11:52:54.107263    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:06.987742   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:58 old-k8s-version-321200 kubelet[1696]: E0226 11:52:58.128356    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:07.001747   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.122972    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:07.001747   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.166473    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 11:53:07.014735   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:07.015737   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:53:07.015737   10808 out.go:239] X Problems detected in kubelet:
	W0226 11:53:07.015737   10808 out.go:239]   Feb 26 11:52:50 old-k8s-version-321200 kubelet[1696]: E0226 11:52:50.104128    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:07.015737   10808 out.go:239]   Feb 26 11:52:54 old-k8s-version-321200 kubelet[1696]: E0226 11:52:54.107263    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:07.015737   10808 out.go:239]   Feb 26 11:52:58 old-k8s-version-321200 kubelet[1696]: E0226 11:52:58.128356    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:07.015737   10808 out.go:239]   Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.122972    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:07.015737   10808 out.go:239]   Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.166473    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 11:53:07.015737   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:07.015737   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:53:17.046007   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:53:17.097383   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:53:17.137390   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.137390   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:53:17.149401   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:53:17.203401   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.341691   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:53:17.350820   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:53:17.392736   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.392736   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:53:17.405715   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:53:17.448110   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.448110   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:53:17.457816   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:53:17.512958   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.512958   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:53:17.524948   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:53:17.574033   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.574033   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:53:17.592047   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:53:17.636904   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.636904   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:53:17.650793   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:53:17.710245   10808 logs.go:276] 0 containers: []
	W0226 11:53:17.710344   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:53:17.710413   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:53:17.710413   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:53:17.775560   10808 logs.go:138] Found kubelet problem: Feb 26 11:52:58 old-k8s-version-321200 kubelet[1696]: E0226 11:52:58.128356    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:17.788568   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.122972    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:17.790560   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.166473    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:17.821145   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:09 old-k8s-version-321200 kubelet[1696]: E0226 11:53:09.115952    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:17.830383   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:11 old-k8s-version-321200 kubelet[1696]: E0226 11:53:11.104644    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:17.841381   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:15 old-k8s-version-321200 kubelet[1696]: E0226 11:53:15.102305    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:17.846376   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:16 old-k8s-version-321200 kubelet[1696]: E0226 11:53:16.110084    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:53:17.855182   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:53:17.855309   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:53:17.903825   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:53:17.903825   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:53:18.074739   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:53:18.074739   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:53:18.074739   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:53:18.126559   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:53:18.126559   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:53:18.226543   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:18.226543   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:53:18.226543   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:53:18.226543   10808 out.go:239]   Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.166473    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:53:01 old-k8s-version-321200 kubelet[1696]: E0226 11:53:01.166473    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:18.226543   10808 out.go:239]   Feb 26 11:53:09 old-k8s-version-321200 kubelet[1696]: E0226 11:53:09.115952    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:53:09 old-k8s-version-321200 kubelet[1696]: E0226 11:53:09.115952    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:18.226543   10808 out.go:239]   Feb 26 11:53:11 old-k8s-version-321200 kubelet[1696]: E0226 11:53:11.104644    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:53:11 old-k8s-version-321200 kubelet[1696]: E0226 11:53:11.104644    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:18.226543   10808 out.go:239]   Feb 26 11:53:15 old-k8s-version-321200 kubelet[1696]: E0226 11:53:15.102305    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:53:15 old-k8s-version-321200 kubelet[1696]: E0226 11:53:15.102305    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:18.226543   10808 out.go:239]   Feb 26 11:53:16 old-k8s-version-321200 kubelet[1696]: E0226 11:53:16.110084    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:53:16 old-k8s-version-321200 kubelet[1696]: E0226 11:53:16.110084    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 11:53:18.226543   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:18.226543   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:53:28.250405   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:53:28.295910   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:53:28.341793   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.341793   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:53:28.354099   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:53:28.412621   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.412621   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:53:28.422623   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:53:28.464185   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.464185   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:53:28.481487   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:53:28.539796   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.539796   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:53:28.554462   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:53:28.608674   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.608674   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:53:28.618700   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:53:28.667681   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.667681   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:53:28.682682   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:53:28.732067   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.732067   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:53:28.743753   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:53:28.796825   10808 logs.go:276] 0 containers: []
	W0226 11:53:28.796825   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:53:28.797456   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:53:28.797715   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:53:28.845931   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:09 old-k8s-version-321200 kubelet[1696]: E0226 11:53:09.115952    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:28.851926   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:11 old-k8s-version-321200 kubelet[1696]: E0226 11:53:11.104644    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:28.863928   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:15 old-k8s-version-321200 kubelet[1696]: E0226 11:53:15.102305    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:28.867930   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:16 old-k8s-version-321200 kubelet[1696]: E0226 11:53:16.110084    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:28.882961   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:21 old-k8s-version-321200 kubelet[1696]: E0226 11:53:21.127676    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:28.888940   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:23 old-k8s-version-321200 kubelet[1696]: E0226 11:53:23.109756    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:53:28.904924   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:53:28.904924   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:53:28.939206   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:53:28.939327   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:53:29.074582   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:53:29.074582   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:53:29.074582   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:53:29.122562   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:53:29.122562   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:53:29.229256   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:29.229256   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:53:29.229256   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:53:29.229256   10808 out.go:239]   Feb 26 11:53:11 old-k8s-version-321200 kubelet[1696]: E0226 11:53:11.104644    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:53:11 old-k8s-version-321200 kubelet[1696]: E0226 11:53:11.104644    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:29.229256   10808 out.go:239]   Feb 26 11:53:15 old-k8s-version-321200 kubelet[1696]: E0226 11:53:15.102305    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:53:15 old-k8s-version-321200 kubelet[1696]: E0226 11:53:15.102305    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:29.229256   10808 out.go:239]   Feb 26 11:53:16 old-k8s-version-321200 kubelet[1696]: E0226 11:53:16.110084    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:53:16 old-k8s-version-321200 kubelet[1696]: E0226 11:53:16.110084    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:29.229256   10808 out.go:239]   Feb 26 11:53:21 old-k8s-version-321200 kubelet[1696]: E0226 11:53:21.127676    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:53:21 old-k8s-version-321200 kubelet[1696]: E0226 11:53:21.127676    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:29.229256   10808 out.go:239]   Feb 26 11:53:23 old-k8s-version-321200 kubelet[1696]: E0226 11:53:23.109756    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:53:23 old-k8s-version-321200 kubelet[1696]: E0226 11:53:23.109756    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 11:53:29.229256   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:29.229256   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:53:39.259147   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:53:39.304829   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 11:53:39.344045   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.344045   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 11:53:39.353047   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 11:53:39.401186   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.401186   10808 logs.go:278] No container was found matching "etcd"
	I0226 11:53:39.410194   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 11:53:39.449192   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.449192   10808 logs.go:278] No container was found matching "coredns"
	I0226 11:53:39.459700   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 11:53:39.506901   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.506901   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 11:53:39.520378   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 11:53:39.572344   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.572444   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 11:53:39.587314   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 11:53:39.634360   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.634360   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 11:53:39.647381   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 11:53:39.687830   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.687830   10808 logs.go:278] No container was found matching "kindnet"
	I0226 11:53:39.702736   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 11:53:39.739622   10808 logs.go:276] 0 containers: []
	W0226 11:53:39.739622   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 11:53:39.739622   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 11:53:39.739622   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:53:39.800618   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:21 old-k8s-version-321200 kubelet[1696]: E0226 11:53:21.127676    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 11:53:39.806620   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:23 old-k8s-version-321200 kubelet[1696]: E0226 11:53:23.109756    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:39.821626   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:29 old-k8s-version-321200 kubelet[1696]: E0226 11:53:29.110632    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:39.824616   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:30 old-k8s-version-321200 kubelet[1696]: E0226 11:53:30.117048    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:39.840114   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:35 old-k8s-version-321200 kubelet[1696]: E0226 11:53:35.106908    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:39.840383   10808 logs.go:138] Found kubelet problem: Feb 26 11:53:35 old-k8s-version-321200 kubelet[1696]: E0226 11:53:35.112187    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:53:39.850883   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 11:53:39.850883   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:53:39.880804   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:53:39.881404   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 11:53:40.013286   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 11:53:40.013286   10808 logs.go:123] Gathering logs for Docker ...
	I0226 11:53:40.013286   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 11:53:40.055152   10808 logs.go:123] Gathering logs for container status ...
	I0226 11:53:40.055152   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:53:40.178641   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:40.178641   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:53:40.178641   10808 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0226 11:53:40.178641   10808 out.go:239]   Feb 26 11:53:23 old-k8s-version-321200 kubelet[1696]: E0226 11:53:23.109756    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:53:23 old-k8s-version-321200 kubelet[1696]: E0226 11:53:23.109756    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:40.178641   10808 out.go:239]   Feb 26 11:53:29 old-k8s-version-321200 kubelet[1696]: E0226 11:53:29.110632    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 26 11:53:29 old-k8s-version-321200 kubelet[1696]: E0226 11:53:29.110632    1696 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 11:53:40.178641   10808 out.go:239]   Feb 26 11:53:30 old-k8s-version-321200 kubelet[1696]: E0226 11:53:30.117048    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 26 11:53:30 old-k8s-version-321200 kubelet[1696]: E0226 11:53:30.117048    1696 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 11:53:40.178641   10808 out.go:239]   Feb 26 11:53:35 old-k8s-version-321200 kubelet[1696]: E0226 11:53:35.106908    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 26 11:53:35 old-k8s-version-321200 kubelet[1696]: E0226 11:53:35.106908    1696 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 11:53:40.178641   10808 out.go:239]   Feb 26 11:53:35 old-k8s-version-321200 kubelet[1696]: E0226 11:53:35.112187    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 26 11:53:35 old-k8s-version-321200 kubelet[1696]: E0226 11:53:35.112187    1696 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0226 11:53:40.178641   10808 out.go:304] Setting ErrFile to fd 1804...
	I0226 11:53:40.178641   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:53:50.223226   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:53:50.251224   10808 kubeadm.go:640] restartCluster took 4m20.8536889s
	W0226 11:53:50.251224   10808 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0226 11:53:50.251224   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 11:53:54.176449   10808 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.9251959s)
	I0226 11:53:54.196427   10808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:53:54.235029   10808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:53:54.257243   10808 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:53:54.281515   10808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:53:54.309569   10808 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:53:54.310584   10808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:53:54.694644   10808 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 11:53:54.694644   10808 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 11:53:54.835957   10808 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 11:53:55.034169   10808 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:57:57.731719   10808 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 11:57:57.732052   10808 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 11:57:57.738862   10808 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 11:57:57.738983   10808 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:57:57.741440   10808 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:57:57.741440   10808 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:57:57.741440   10808 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:57:57.742434   10808 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:57:57.742434   10808 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:57:57.742434   10808 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 11:57:57.742434   10808 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:57:57.746432   10808 out.go:204]   - Generating certificates and keys ...
	I0226 11:57:57.746432   10808 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:57:57.746432   10808 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:57:57.746432   10808 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 11:57:57.746432   10808 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 11:57:57.746432   10808 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 11:57:57.746432   10808 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 11:57:57.747434   10808 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 11:57:57.747434   10808 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 11:57:57.747434   10808 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 11:57:57.747434   10808 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 11:57:57.747434   10808 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 11:57:57.748429   10808 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:57:57.748429   10808 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:57:57.748429   10808 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:57:57.748429   10808 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:57:57.748429   10808 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:57:57.748429   10808 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:57:57.751434   10808 out.go:204]   - Booting up control plane ...
	I0226 11:57:57.751434   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:57:57.751434   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:57:57.752454   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:57:57.752454   10808 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:57:57.752454   10808 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:57:57.752454   10808 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 11:57:57.752454   10808 kubeadm.go:322] 
	I0226 11:57:57.752454   10808 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 11:57:57.752454   10808 kubeadm.go:322] 	timed out waiting for the condition
	I0226 11:57:57.753429   10808 kubeadm.go:322] 
	I0226 11:57:57.753429   10808 kubeadm.go:322] This error is likely caused by:
	I0226 11:57:57.753429   10808 kubeadm.go:322] 	- The kubelet is not running
	I0226 11:57:57.753429   10808 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 11:57:57.753429   10808 kubeadm.go:322] 
	I0226 11:57:57.753429   10808 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 11:57:57.753429   10808 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 11:57:57.753429   10808 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 11:57:57.753429   10808 kubeadm.go:322] 
	I0226 11:57:57.754427   10808 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 11:57:57.754427   10808 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 11:57:57.754427   10808 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 11:57:57.754427   10808 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 11:57:57.754427   10808 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 11:57:57.755431   10808 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0226 11:57:57.755431   10808 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0226 11:57:57.755431   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 11:57:59.656848   10808 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.9014026s)
	I0226 11:57:59.671178   10808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:57:59.699625   10808 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:57:59.712671   10808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:57:59.729671   10808 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:57:59.729671   10808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:58:00.098698   10808 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 11:58:00.099651   10808 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 11:58:00.256214   10808 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 11:58:00.478017   10808 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 12:02:02.411780   10808 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 12:02:02.412124   10808 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 12:02:02.419498   10808 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 12:02:02.419498   10808 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 12:02:02.420047   10808 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 12:02:02.420163   10808 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 12:02:02.420163   10808 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 12:02:02.420801   10808 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 12:02:02.421076   10808 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 12:02:02.421178   10808 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 12:02:02.421395   10808 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 12:02:02.424697   10808 out.go:204]   - Generating certificates and keys ...
	I0226 12:02:02.425833   10808 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 12:02:02.426007   10808 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 12:02:02.426252   10808 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 12:02:02.426462   10808 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 12:02:02.426621   10808 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 12:02:02.426771   10808 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 12:02:02.426995   10808 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 12:02:02.427148   10808 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 12:02:02.427347   10808 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 12:02:02.427499   10808 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 12:02:02.427606   10808 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 12:02:02.427782   10808 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 12:02:02.427967   10808 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 12:02:02.428065   10808 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 12:02:02.430918   10808 out.go:204]   - Booting up control plane ...
	I0226 12:02:02.431350   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 12:02:02.432186   10808 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 12:02:02.432186   10808 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 12:02:02.432186   10808 kubeadm.go:322] 
	I0226 12:02:02.432486   10808 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 12:02:02.432535   10808 kubeadm.go:322] 	timed out waiting for the condition
	I0226 12:02:02.432624   10808 kubeadm.go:322] 
	I0226 12:02:02.432759   10808 kubeadm.go:322] This error is likely caused by:
	I0226 12:02:02.432855   10808 kubeadm.go:322] 	- The kubelet is not running
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 12:02:02.433010   10808 kubeadm.go:322] 
	I0226 12:02:02.433010   10808 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 12:02:02.433010   10808 kubeadm.go:322] 
	I0226 12:02:02.433539   10808 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 12:02:02.433913   10808 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 12:02:02.434165   10808 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 12:02:02.434297   10808 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 12:02:02.434487   10808 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 12:02:02.434487   10808 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 12:02:02.434860   10808 kubeadm.go:406] StartCluster complete in 12m33.1269006s
	I0226 12:02:02.442099   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 12:02:02.490569   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.490672   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 12:02:02.502331   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 12:02:02.541142   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.541142   10808 logs.go:278] No container was found matching "etcd"
	I0226 12:02:02.550354   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 12:02:02.587881   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.587881   10808 logs.go:278] No container was found matching "coredns"
	I0226 12:02:02.596635   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 12:02:02.635750   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.635846   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 12:02:02.636707   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 12:02:02.683458   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.683458   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 12:02:02.692816   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 12:02:02.730653   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.730653   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 12:02:02.739810   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 12:02:02.776933   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.776933   10808 logs.go:278] No container was found matching "kindnet"
	I0226 12:02:02.791523   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 12:02:02.829156   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.829359   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 12:02:02.829359   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 12:02:02.829359   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 12:02:02.873211   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 12:02:02.878537   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 12:02:02.879201   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 12:02:02.898957   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:50 old-k8s-version-321200 kubelet[11360]: E0226 12:01:50.055188   11360 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 12:02:02.901305   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:51 old-k8s-version-321200 kubelet[11360]: E0226 12:01:51.050143   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 12:02:02.913165   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:56 old-k8s-version-321200 kubelet[11360]: E0226 12:01:56.056683   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 12:02:02.913165   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:56 old-k8s-version-321200 kubelet[11360]: E0226 12:01:56.058255   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 12:02:02.925848   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 12:02:02.925848   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 12:02:02.960790   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 12:02:02.960790   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 12:02:03.127567   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 12:02:03.127567   10808 logs.go:123] Gathering logs for Docker ...
	I0226 12:02:03.127567   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 12:02:03.180152   10808 logs.go:123] Gathering logs for container status ...
	I0226 12:02:03.180152   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 12:02:03.265121   10808 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 12:02:03.266022   10808 out.go:239] * 
	W0226 12:02:03.266022   10808 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 12:02:03.266022   10808 out.go:239] * 
	W0226 12:02:03.267771   10808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 12:02:03.272116   10808 out.go:177] X Problems detected in kubelet:
	I0226 12:02:03.277200   10808 out.go:177]   Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 12:02:03.282640   10808 out.go:177]   Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 12:02:03.288461   10808 out.go:177]   Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 12:02:03.295212   10808 out.go:177] 
	W0226 12:02:03.297284   10808 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 12:02:03.297284   10808 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 12:02:03.297284   10808 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 12:02:03.301974   10808 out.go:177] 

                                                
                                                
** /stderr **
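Every "Found kubelet problem" entry in the trace above reports the same `ImageInspectError` against a `k8s.gcr.io` image. A minimal sketch (a hypothetical triage helper, not part of the minikube test suite) of extracting the affected image references from such kubelet lines:

```python
import re

# Matches the ImageInspectError detail as it appears in the captured kubelet
# lines above, where the inner quotes are escaped as \" in the raw log text,
# e.g.: ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": ..."
PATTERN = re.compile(r'ImageInspectError: "Failed to inspect image \\"([^"\\]+)\\"')


def affected_images(log_lines):
    """Return the unique image references named in ImageInspectError lines,
    in first-seen order."""
    images = []
    for line in log_lines:
        m = PATTERN.search(line)
        if m and m.group(1) not in images:
            images.append(m.group(1))
    return images
```

Run against the kubelet lines from this trace, it would reduce them to the four affected images: `k8s.gcr.io/kube-apiserver:v1.16.0`, `k8s.gcr.io/kube-scheduler:v1.16.0`, `k8s.gcr.io/etcd:3.3.15-0`, and `k8s.gcr.io/kube-controller-manager:v1.16.0`.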
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-321200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-321200
helpers_test.go:235: (dbg) docker inspect old-k8s-version-321200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242",
	        "Created": "2024-02-26T11:37:54.399460536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288204,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:48:44.943739959Z",
	            "FinishedAt": "2024-02-26T11:48:39.811543318Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hosts",
	        "LogPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242-json.log",
	        "Name": "/old-k8s-version-321200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-321200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-321200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-321200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-321200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-321200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7222b3f3e5ce4dac0da549da99a03d584793c96ffcfcede00bdd92b38fae1e9",
	            "SandboxKey": "/var/run/docker/netns/d7222b3f3e5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54515"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54517"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54518"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54519"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-321200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e96dc767099",
	                        "old-k8s-version-321200"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3d8e32e292076657fa3147b08ea4473653a270d339de0a1d187a6074718ce682",
	                    "EndpointID": "c20f02b95d19bda13eac592cade6848b5588ff57ea6c759d59d9a04f07452b51",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-321200",
	                        "9e96dc767099"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 2 (1.2569993s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0226 12:02:04.631180   13492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-321200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-321200 logs -n 25: (1.7033514s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | iptables -t nat -L -n -v                             |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status kubelet --all                       |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat kubelet                                |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | journalctl -xeu kubelet --all                        |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status docker --all                        |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat docker                                 |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/docker/daemon.json                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo docker                        | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | system info                                          |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status cri-docker                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat cri-docker                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | cri-dockerd --version                                |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status containerd                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat containerd                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /lib/systemd/system/containerd.service               |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/containerd/config.toml                          |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | containerd config dump                               |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC |                     |
	|         | systemctl status crio --all                          |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat crio --no-pager                        |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo find                          | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo crio                          | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | config                                               |                |                   |         |                     |                     |
	| delete  | -p kubenet-968100                                    | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:58:49
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:58:49.462776   11684 out.go:291] Setting OutFile to fd 1776 ...
	I0226 11:58:49.462776   11684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:58:49.462776   11684 out.go:304] Setting ErrFile to fd 2020...
	I0226 11:58:49.462776   11684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:58:49.486423   11684 out.go:298] Setting JSON to false
	I0226 11:58:49.493547   11684 start.go:129] hostinfo: {"hostname":"minikube7","uptime":6006,"bootTime":1708942723,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:58:49.493547   11684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:58:49.498810   11684 out.go:177] * [kubenet-968100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:58:49.503131   11684 notify.go:220] Checking for updates...
	I0226 11:58:49.506715   11684 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:58:49.514044   11684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:58:49.519873   11684 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:58:49.533324   11684 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:58:49.537864   11684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:58:49.542834   11684 config.go:182] Loaded profile config "bridge-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:58:49.542834   11684 config.go:182] Loaded profile config "flannel-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:58:49.543839   11684 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:58:49.543839   11684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:58:49.882500   11684 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:58:49.890519   11684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:58:50.312386   11684 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:58:50.264958915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:58:50.320392   11684 out.go:177] * Using the docker driver based on user configuration
	I0226 11:58:50.330385   11684 start.go:299] selected driver: docker
	I0226 11:58:50.330385   11684 start.go:903] validating driver "docker" against <nil>
	I0226 11:58:50.330385   11684 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:58:50.467384   11684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:58:50.877848   11684 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:58:50.832162825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:58:50.878772   11684 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:58:50.879946   11684 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:58:50.882708   11684 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 11:58:50.886506   11684 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0226 11:58:50.886575   11684 start_flags.go:323] config:
	{Name:kubenet-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:58:50.890996   11684 out.go:177] * Starting control plane node kubenet-968100 in cluster kubenet-968100
	I0226 11:58:50.896276   11684 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:58:50.902070   11684 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:58:50.909440   11684 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:58:50.909440   11684 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:58:50.909440   11684 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 11:58:50.909440   11684 cache.go:56] Caching tarball of preloaded images
	I0226 11:58:50.909440   11684 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:58:50.909440   11684 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0226 11:58:50.909440   11684 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\config.json ...
	I0226 11:58:50.910431   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\config.json: {Name:mk447734039250feafe4a6fa48e3612ca359a1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:58:51.125366   11684 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:58:51.125366   11684 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:58:51.125366   11684 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:58:51.125366   11684 start.go:365] acquiring machines lock for kubenet-968100: {Name:mk4d4f541c1002c737ff1cec6a45768ae16fec80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:58:51.126366   11684 start.go:369] acquired machines lock for "kubenet-968100" in 999.6µs
	I0226 11:58:51.126366   11684 start.go:93] Provisioning new machine with config: &{Name:kubenet-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:58:51.126366   11684 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:58:51.134358   11684 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0226 11:58:51.135387   11684 start.go:159] libmachine.API.Create for "kubenet-968100" (driver="docker")
	I0226 11:58:51.135387   11684 client.go:168] LocalClient.Create starting
	I0226 11:58:51.135387   11684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 11:58:51.135387   11684 main.go:141] libmachine: Decoding PEM data...
	I0226 11:58:51.135387   11684 main.go:141] libmachine: Parsing certificate...
	I0226 11:58:51.136375   11684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 11:58:51.136375   11684 main.go:141] libmachine: Decoding PEM data...
	I0226 11:58:51.136375   11684 main.go:141] libmachine: Parsing certificate...
	I0226 11:58:51.150362   11684 cli_runner.go:164] Run: docker network inspect kubenet-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:58:51.343550   11684 cli_runner.go:211] docker network inspect kubenet-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:58:51.352553   11684 network_create.go:281] running [docker network inspect kubenet-968100] to gather additional debugging logs...
	I0226 11:58:51.352553   11684 cli_runner.go:164] Run: docker network inspect kubenet-968100
	W0226 11:58:51.532691   11684 cli_runner.go:211] docker network inspect kubenet-968100 returned with exit code 1
	I0226 11:58:51.532691   11684 network_create.go:284] error running [docker network inspect kubenet-968100]: docker network inspect kubenet-968100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-968100 not found
	I0226 11:58:51.532691   11684 network_create.go:286] output of [docker network inspect kubenet-968100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-968100 not found
	
	** /stderr **
	I0226 11:58:51.542710   11684 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:58:51.762660   11684 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:58:51.787664   11684 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a3ec0}
	I0226 11:58:51.788672   11684 network_create.go:124] attempt to create docker network kubenet-968100 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0226 11:58:51.801915   11684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100
	W0226 11:58:51.993135   11684 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100 returned with exit code 1
	W0226 11:58:51.993135   11684 network_create.go:149] failed to create docker network kubenet-968100 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 11:58:51.993135   11684 network_create.go:116] failed to create docker network kubenet-968100 192.168.58.0/24, will retry: subnet is taken
	I0226 11:58:52.028123   11684 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:58:52.051065   11684 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002239ec0}
	I0226 11:58:52.051065   11684 network_create.go:124] attempt to create docker network kubenet-968100 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0226 11:58:52.060993   11684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100
	I0226 11:58:52.401700   11684 network_create.go:108] docker network kubenet-968100 192.168.67.0/24 created
	I0226 11:58:52.401700   11684 kic.go:121] calculated static IP "192.168.67.2" for the "kubenet-968100" container
	I0226 11:58:52.429817   11684 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:58:52.629326   11684 cli_runner.go:164] Run: docker volume create kubenet-968100 --label name.minikube.sigs.k8s.io=kubenet-968100 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:58:52.835254   11684 oci.go:103] Successfully created a docker volume kubenet-968100
	I0226 11:58:52.847112   11684 cli_runner.go:164] Run: docker run --rm --name kubenet-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --entrypoint /usr/bin/test -v kubenet-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:58:55.434970   11684 cli_runner.go:217] Completed: docker run --rm --name kubenet-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --entrypoint /usr/bin/test -v kubenet-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (2.5878382s)
	I0226 11:58:55.435971   11684 oci.go:107] Successfully prepared a docker volume kubenet-968100
	I0226 11:58:55.435971   11684 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:58:55.435971   11684 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:58:55.445968   11684 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-968100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:59:19.468057   11684 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-968100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (24.0219113s)
	I0226 11:59:19.475642   11684 kic.go:203] duration metric: took 24.039493 seconds to extract preloaded images to volume
	I0226 11:59:19.484643   11684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:59:19.857161   11684 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:90 SystemTime:2024-02-26 11:59:19.818461467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 11:59:19.869979   11684 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 11:59:20.243542   11684 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-968100 --name kubenet-968100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-968100 --network kubenet-968100 --ip 192.168.67.2 --volume kubenet-968100:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 11:59:22.135281   11684 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-968100 --name kubenet-968100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-968100 --network kubenet-968100 --ip 192.168.67.2 --volume kubenet-968100:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf: (1.8907097s)
	I0226 11:59:22.147281   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Running}}
	I0226 11:59:22.355740   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 11:59:22.525700   11684 cli_runner.go:164] Run: docker exec kubenet-968100 stat /var/lib/dpkg/alternatives/iptables
	I0226 11:59:22.816214   11684 oci.go:144] the created container "kubenet-968100" has a running status.
	I0226 11:59:22.816214   11684 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa...
	I0226 11:59:23.024213   11684 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 11:59:23.567345   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 11:59:23.804327   11684 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 11:59:23.804327   11684 kic_runner.go:114] Args: [docker exec --privileged kubenet-968100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 11:59:24.110802   11684 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa...
	I0226 11:59:27.017532   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 11:59:27.209192   11684 machine.go:88] provisioning docker machine ...
	I0226 11:59:27.209192   11684 ubuntu.go:169] provisioning hostname "kubenet-968100"
	I0226 11:59:27.217186   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:27.433469   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:27.444745   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:27.444745   11684 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-968100 && echo "kubenet-968100" | sudo tee /etc/hostname
	I0226 11:59:27.680961   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-968100
	
	I0226 11:59:27.692100   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:27.890314   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:27.891293   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:27.891293   11684 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-968100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-968100/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-968100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:59:28.103109   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:59:28.103261   11684 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 11:59:28.103261   11684 ubuntu.go:177] setting up certificates
	I0226 11:59:28.103261   11684 provision.go:83] configureAuth start
	I0226 11:59:28.117214   11684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-968100
	I0226 11:59:28.322297   11684 provision.go:138] copyHostCerts
	I0226 11:59:28.323358   11684 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 11:59:28.323358   11684 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 11:59:28.324115   11684 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 11:59:28.325658   11684 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 11:59:28.325754   11684 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 11:59:28.326133   11684 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 11:59:28.327417   11684 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 11:59:28.327417   11684 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 11:59:28.328106   11684 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 11:59:28.329676   11684 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-968100 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-968100]
	I0226 11:59:28.584753   11684 provision.go:172] copyRemoteCerts
	I0226 11:59:28.601991   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:59:28.612757   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:28.788916   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:28.941271   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 11:59:28.999437   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0226 11:59:29.065616   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 11:59:29.120202   11684 provision.go:86] duration metric: configureAuth took 1.0168767s
	I0226 11:59:29.120258   11684 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:59:29.120926   11684 config.go:182] Loaded profile config "kubenet-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:59:29.137779   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:29.337368   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:29.338361   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:29.338361   11684 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 11:59:29.534297   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 11:59:29.534297   11684 ubuntu.go:71] root file system type: overlay
	I0226 11:59:29.534297   11684 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 11:59:29.551278   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:29.732038   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:29.732038   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:29.732038   11684 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 11:59:29.955964   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
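[Editor's note] The blank-then-set `ExecStart=` pair in the unit above is standard systemd practice for overriding a list-valued directive. A minimal standalone drop-in illustrating the same technique (the path and dockerd flags here are illustrative, not minikube's actual values) would look like:

```ini
# /etc/systemd/system/docker.service.d/override.conf (illustrative sketch)
[Service]
# Clear the ExecStart inherited from the base unit; without this empty
# assignment, systemd would see two ExecStart= settings and refuse to
# start a Type=notify service.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd://
```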
	I0226 11:59:29.967185   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:30.156050   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:30.156439   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:30.156439   11684 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 11:59:32.081855   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 11:59:29.943107196 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
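[Editor's note] The `diff -u old new || { mv ...; systemctl ...; }` command above is a compare-then-swap: the candidate unit is written to `docker.service.new`, and only when it differs from the installed file is it moved into place and the daemon restarted. A minimal sketch of the same pattern on temp files (the restart half is stubbed with an echo, since this sketch assumes no systemd):

```shell
#!/usr/bin/env bash
# Compare-then-swap sketch: only replace the installed file, and only
# trigger the (stubbed) reload, when the new content actually differs.
unit=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$unit"
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$unit.new"

# diff exits non-zero when the files differ, so the braces run only then.
diff -u "$unit" "$unit.new" || {
  mv "$unit.new" "$unit"
  echo "changed: would run systemctl daemon-reload && systemctl restart docker"
}
```

When the files are identical, `diff` exits 0 and the `.new` file is simply left behind, so an unchanged configuration never causes a docker restart.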
	I0226 11:59:32.081855   11684 machine.go:91] provisioned docker machine in 4.8726267s
	I0226 11:59:32.081855   11684 client.go:171] LocalClient.Create took 40.9461651s
	I0226 11:59:32.081855   11684 start.go:167] duration metric: libmachine.API.Create for "kubenet-968100" took 40.9461651s
	I0226 11:59:32.081855   11684 start.go:300] post-start starting for "kubenet-968100" (driver="docker")
	I0226 11:59:32.081855   11684 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:59:32.094850   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:59:32.104850   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:32.283589   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:32.449860   11684 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:59:32.462064   11684 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:59:32.462064   11684 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:59:32.462064   11684 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:59:32.462064   11684 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:59:32.462064   11684 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 11:59:32.462064   11684 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 11:59:32.463062   11684 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 11:59:32.476060   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 11:59:32.520389   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 11:59:32.590619   11684 start.go:303] post-start completed in 508.7601ms
	I0226 11:59:32.605621   11684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-968100
	I0226 11:59:32.817425   11684 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\config.json ...
	I0226 11:59:32.836756   11684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:59:32.844752   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:33.048505   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:33.188521   11684 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:59:33.201507   11684 start.go:128] duration metric: createHost completed in 42.0748295s
	I0226 11:59:33.201507   11684 start.go:83] releasing machines lock for "kubenet-968100", held for 42.0748295s
	I0226 11:59:33.210537   11684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-968100
	I0226 11:59:33.413848   11684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:59:33.422753   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:33.423742   11684 ssh_runner.go:195] Run: cat /version.json
	I0226 11:59:33.431741   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:33.613974   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:33.626972   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:33.991964   11684 ssh_runner.go:195] Run: systemctl --version
	I0226 11:59:34.024458   11684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 11:59:34.059268   11684 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0226 11:59:34.079195   11684 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
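[Editor's note] The stderr above shows `find` receiving the literal string `\etc\cni\net.d`: a Windows-style path separator leaked into the remote command. A small sketch of why that fails on the Linux side:

```shell
#!/usr/bin/env bash
# On Linux, backslash is an ordinary filename character, not a path
# separator, so '\etc\cni\net.d' names a single (almost certainly
# nonexistent) entry relative to the current directory rather than the
# directory /etc/cni/net.d -- hence find's "No such file or directory".
if ! find '\etc\cni\net.d' -maxdepth 1 2>/dev/null; then
  echo "backslash path is not a directory path on Linux"
fi
```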
	I0226 11:59:34.093193   11684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 11:59:34.173421   11684 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 11:59:34.173421   11684 start.go:475] detecting cgroup driver to use...
	I0226 11:59:34.173421   11684 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:59:34.173873   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:59:34.216452   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0226 11:59:34.246782   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 11:59:34.275336   11684 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 11:59:34.287821   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 11:59:34.320149   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:59:34.352353   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 11:59:34.398474   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:59:34.431275   11684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:59:34.469281   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 11:59:34.586786   11684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:59:34.687642   11684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:59:34.719877   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:35.003749   11684 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 11:59:35.143232   11684 start.go:475] detecting cgroup driver to use...
	I0226 11:59:35.143347   11684 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:59:35.164107   11684 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 11:59:35.196859   11684 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 11:59:35.208472   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 11:59:35.236459   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:59:35.310241   11684 ssh_runner.go:195] Run: which cri-dockerd
	I0226 11:59:35.332243   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 11:59:35.361165   11684 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (193 bytes)
	I0226 11:59:35.420146   11684 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 11:59:35.634748   11684 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 11:59:35.777039   11684 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 11:59:35.777254   11684 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 11:59:35.828157   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:35.981088   11684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:59:36.715732   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0226 11:59:36.971473   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 11:59:37.010790   11684 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0226 11:59:37.207519   11684 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0226 11:59:37.375246   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:37.530464   11684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0226 11:59:37.568762   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 11:59:37.606256   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:37.761173   11684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0226 11:59:37.932692   11684 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0226 11:59:37.946224   11684 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0226 11:59:37.958373   11684 start.go:543] Will wait 60s for crictl version
	I0226 11:59:37.978346   11684 ssh_runner.go:195] Run: which crictl
	I0226 11:59:38.001282   11684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 11:59:38.107823   11684 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0226 11:59:38.117383   11684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:59:38.192471   11684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:59:38.242072   11684 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0226 11:59:38.252438   11684 cli_runner.go:164] Run: docker exec -t kubenet-968100 dig +short host.docker.internal
	I0226 11:59:38.507517   11684 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 11:59:38.523561   11684 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 11:59:38.538099   11684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:59:38.568506   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:38.766723   11684 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:59:38.776638   11684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:59:38.819588   11684 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 11:59:38.819588   11684 docker.go:615] Images already preloaded, skipping extraction
	I0226 11:59:38.829701   11684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:59:38.880713   11684 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 11:59:38.880713   11684 cache_images.go:84] Images are preloaded, skipping loading
	I0226 11:59:38.893595   11684 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:59:39.007223   11684 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0226 11:59:39.007223   11684 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:59:39.007223   11684 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-968100 NodeName:kubenet-968100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 11:59:39.007223   11684 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-968100"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:59:39.007223   11684 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubenet-968100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:59:39.020587   11684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 11:59:39.041934   11684 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:59:39.054628   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:59:39.075875   11684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
	I0226 11:59:39.106085   11684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:59:39.136497   11684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0226 11:59:39.182369   11684 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:59:39.196951   11684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
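[Editor's note] The `/etc/hosts` commands above (here and for `host.minikube.internal` earlier) are a replace-or-append update: filter out any stale line for the name, append the fresh mapping, and copy the result back over the file. A sketch of the same pattern exercised on a temp file instead of the real `/etc/hosts` (so no sudo is needed):

```shell
#!/usr/bin/env bash
# Idempotent hosts-entry update, modeled on the minikube command above.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
tab=$(printf '\t')

add_entry() {  # add_entry IP NAME: drop any existing line for NAME, append a fresh one
  local ip=$1 name=$2
  { grep -v "${tab}${name}\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
  cp "$hosts.new" "$hosts"   # the real run does: sudo cp /tmp/h.$$ /etc/hosts
}

add_entry 192.168.67.2 control-plane.minikube.internal
add_entry 192.168.67.9 control-plane.minikube.internal  # rerun replaces; never duplicates
grep control-plane "$hosts"
```

Writing through a temp file and `cp`-ing it back keeps the swap to a single privileged operation, which is why the original pipes through `/tmp/h.$$` rather than editing `/etc/hosts` in place.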
	I0226 11:59:39.215790   11684 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100 for IP: 192.168.67.2
	I0226 11:59:39.215790   11684 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.216597   11684 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:59:39.216867   11684 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 11:59:39.217555   11684 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.key
	I0226 11:59:39.217671   11684 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt with IP's: []
	I0226 11:59:39.486729   11684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt ...
	I0226 11:59:39.486729   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: {Name:mk0c4c5b5f6bf83cc7f3221d74996d34e7e9722c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.488240   11684 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.key ...
	I0226 11:59:39.488240   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.key: {Name:mkbba140057490f59c3bf6f4aab1ab4141707741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.488578   11684 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e
	I0226 11:59:39.489601   11684 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:59:39.574808   11684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e ...
	I0226 11:59:39.574808   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e: {Name:mke3e48a6ec59f2fb3fc7f0a538c8a0fd45851f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.576759   11684 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e ...
	I0226 11:59:39.576759   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e: {Name:mkf5ad641787e565fa40ed23ba72170388f003f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.578279   11684 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt
	I0226 11:59:39.587286   11684 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key
	I0226 11:59:39.587964   11684 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key
	I0226 11:59:39.589003   11684 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt with IP's: []
	I0226 11:59:39.753658   11684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt ...
	I0226 11:59:39.753658   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt: {Name:mk70f1435fcc5d980ede8ca3f74b6fbaacaeb591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.754682   11684 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key ...
	I0226 11:59:39.754682   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key: {Name:mk4c7d5ad5f032a84d020dd948a761163af3cbcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.765460   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:59:39.765460   11684 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:59:39.765460   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:59:39.768500   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:59:39.809947   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 11:59:39.856185   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:59:39.895390   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 11:59:39.937313   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:59:39.978269   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:59:40.019062   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:59:40.066354   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:59:40.105811   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:59:40.150210   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:59:40.190172   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:59:40.230523   11684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:59:40.272735   11684 ssh_runner.go:195] Run: openssl version
	I0226 11:59:40.296477   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:59:40.327587   11684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:59:40.338489   11684 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:59:40.350183   11684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:59:40.377607   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:59:40.409835   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:59:40.439708   11684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:59:40.451398   11684 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:59:40.464412   11684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:59:40.488359   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:59:40.517952   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:59:40.548360   11684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:59:40.559034   11684 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:59:40.572549   11684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:59:40.601241   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:59:40.633143   11684 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:59:40.643786   11684 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 11:59:40.644047   11684 kubeadm.go:404] StartCluster: {Name:kubenet-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:59:40.653403   11684 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:59:40.706236   11684 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:59:40.738787   11684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:59:40.758174   11684 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:59:40.768691   11684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:59:40.787307   11684 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:59:40.787374   11684 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:59:40.958429   11684 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0226 11:59:41.108408   11684 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:59:57.415748   11684 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0226 11:59:57.415886   11684 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:59:57.416131   11684 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:59:57.416131   11684 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:59:57.416131   11684 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:59:57.416909   11684 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:59:57.418960   11684 out.go:204]   - Generating certificates and keys ...
	I0226 11:59:57.419850   11684 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:59:57.419928   11684 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:59:57.419928   11684 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:59:57.419928   11684 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:59:57.420567   11684 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:59:57.420640   11684 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:59:57.420768   11684 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:59:57.420969   11684 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubenet-968100 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 11:59:57.421211   11684 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubenet-968100 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:59:57.422121   11684 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:59:57.422184   11684 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:59:57.422350   11684 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:59:57.422701   11684 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:59:57.422828   11684 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:59:57.423020   11684 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:59:57.423278   11684 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:59:57.425919   11684 out.go:204]   - Booting up control plane ...
	I0226 11:59:57.426215   11684 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:59:57.426445   11684 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:59:57.426593   11684 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:59:57.426593   11684 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:59:57.427171   11684 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:59:57.427391   11684 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 11:59:57.427507   11684 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:59:57.427507   11684 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505149 seconds
	I0226 11:59:57.428057   11684 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 11:59:57.428057   11684 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 11:59:57.428057   11684 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 11:59:57.428794   11684 kubeadm.go:322] [mark-control-plane] Marking the node kubenet-968100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 11:59:57.428794   11684 kubeadm.go:322] [bootstrap-token] Using token: 2nstr1.vudotf4rhd8prt3r
	I0226 11:59:57.433580   11684 out.go:204]   - Configuring RBAC rules ...
	I0226 11:59:57.433865   11684 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 11:59:57.433865   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 11:59:57.433865   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 11:59:57.434813   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 11:59:57.435152   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 11:59:57.435152   11684 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 11:59:57.435152   11684 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 11:59:57.435712   11684 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 11:59:57.435932   11684 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 11:59:57.435932   11684 kubeadm.go:322] 
	I0226 11:59:57.435932   11684 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 11:59:57.435932   11684 kubeadm.go:322] 
	I0226 11:59:57.435932   11684 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 11:59:57.435932   11684 kubeadm.go:322] 
	I0226 11:59:57.435932   11684 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 11:59:57.436468   11684 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 11:59:57.436587   11684 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 11:59:57.436587   11684 kubeadm.go:322] 
	I0226 11:59:57.436587   11684 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 11:59:57.436587   11684 kubeadm.go:322] 
	I0226 11:59:57.436587   11684 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 11:59:57.436587   11684 kubeadm.go:322] 
	I0226 11:59:57.437158   11684 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 11:59:57.437304   11684 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 11:59:57.437304   11684 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 11:59:57.437470   11684 kubeadm.go:322] 
	I0226 11:59:57.437671   11684 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 11:59:57.437981   11684 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 11:59:57.437981   11684 kubeadm.go:322] 
	I0226 11:59:57.437981   11684 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2nstr1.vudotf4rhd8prt3r \
	I0226 11:59:57.437981   11684 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d \
	I0226 11:59:57.438588   11684 kubeadm.go:322] 	--control-plane 
	I0226 11:59:57.438588   11684 kubeadm.go:322] 
	I0226 11:59:57.438588   11684 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 11:59:57.438588   11684 kubeadm.go:322] 
	I0226 11:59:57.438588   11684 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2nstr1.vudotf4rhd8prt3r \
	I0226 11:59:57.439233   11684 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d 
	I0226 11:59:57.439233   11684 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0226 11:59:57.439233   11684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 11:59:57.457199   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=kubenet-968100 minikube.k8s.io/updated_at=2024_02_26T11_59_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:57.458727   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:57.478022   11684 ops.go:34] apiserver oom_adj: -16
	I0226 11:59:58.095212   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:58.601901   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:59.112511   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:59.601732   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:00.092331   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:00.596749   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:01.100947   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:01.605414   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:02.098988   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:02.601091   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:03.094554   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:03.601664   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:04.108587   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:04.594260   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:05.101399   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:05.605636   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:06.094766   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:06.599985   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:07.092311   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:07.594463   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:08.094947   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:08.598990   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:09.107874   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:09.599780   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:09.804019   11684 kubeadm.go:1088] duration metric: took 12.3646948s to wait for elevateKubeSystemPrivileges.
	I0226 12:00:09.804019   11684 kubeadm.go:406] StartCluster complete in 29.1597569s
	I0226 12:00:09.804019   11684 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:00:09.804711   11684 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 12:00:09.806536   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:00:09.808582   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 12:00:09.808679   11684 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 12:00:09.808801   11684 addons.go:69] Setting storage-provisioner=true in profile "kubenet-968100"
	I0226 12:00:09.808801   11684 addons.go:234] Setting addon storage-provisioner=true in "kubenet-968100"
	I0226 12:00:09.808801   11684 addons.go:69] Setting default-storageclass=true in profile "kubenet-968100"
	I0226 12:00:09.808801   11684 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-968100"
	I0226 12:00:09.808801   11684 host.go:66] Checking if "kubenet-968100" exists ...
	I0226 12:00:09.808801   11684 config.go:182] Loaded profile config "kubenet-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 12:00:09.834322   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 12:00:09.837013   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 12:00:10.028525   11684 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 12:00:10.031125   11684 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 12:00:10.031125   11684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 12:00:10.044822   11684 addons.go:234] Setting addon default-storageclass=true in "kubenet-968100"
	I0226 12:00:10.044883   11684 host.go:66] Checking if "kubenet-968100" exists ...
	I0226 12:00:10.044883   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 12:00:10.072331   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 12:00:10.229991   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 12:00:10.265380   11684 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 12:00:10.265380   11684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 12:00:10.280518   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 12:00:10.426779   11684 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubenet-968100" context rescaled to 1 replicas
	I0226 12:00:10.426779   11684 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 12:00:10.430766   11684 out.go:177] * Verifying Kubernetes components...
	I0226 12:00:10.446754   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:00:10.456914   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0226 12:00:10.463837   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 12:00:10.491297   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-968100
	I0226 12:00:10.600727   11684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 12:00:10.684315   11684 node_ready.go:35] waiting up to 15m0s for node "kubenet-968100" to be "Ready" ...
	I0226 12:00:10.756874   11684 node_ready.go:49] node "kubenet-968100" has status "Ready":"True"
	I0226 12:00:10.756919   11684 node_ready.go:38] duration metric: took 72.5286ms waiting for node "kubenet-968100" to be "Ready" ...
	I0226 12:00:10.756995   11684 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:00:10.781104   11684 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:10.894572   11684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 12:00:12.969804   11684 pod_ready.go:102] pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:14.270352   11684 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.8134105s)
	I0226 12:00:14.270490   11684 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0226 12:00:14.558441   11684 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.9576848s)
	I0226 12:00:14.558441   11684 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.6636543s)
	I0226 12:00:14.586624   11684 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 12:00:14.588699   11684 addons.go:505] enable addons completed in 4.7799843s: enabled=[storage-provisioner default-storageclass]
	I0226 12:00:14.805460   11684 pod_ready.go:92] pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:14.805513   11684 pod_ready.go:81] duration metric: took 4.0242069s waiting for pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:14.805513   11684 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:16.834722   11684 pod_ready.go:102] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:19.335948   11684 pod_ready.go:102] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:21.834406   11684 pod_ready.go:102] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:24.329289   11684 pod_ready.go:97] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-26 12:00:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-26 12:00:13 +0000 UTC,FinishedAt:2024-02-26 12:00:23 +0000 UTC,ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410 Started:0xc002bb4ad0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0226 12:00:24.329289   11684 pod_ready.go:81] duration metric: took 9.5237055s waiting for pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace to be "Ready" ...
	E0226 12:00:24.329289   11684 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-26 12:00:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-26 12:00:13 +0000 UTC,FinishedAt:2024-02-26 12:00:23 +0000 UTC,ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410 Started:0xc002bb4ad0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0226 12:00:24.329289   11684 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.346720   11684 pod_ready.go:92] pod "etcd-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.346720   11684 pod_ready.go:81] duration metric: took 17.2377ms waiting for pod "etcd-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.346720   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.360255   11684 pod_ready.go:92] pod "kube-apiserver-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.360255   11684 pod_ready.go:81] duration metric: took 13.535ms waiting for pod "kube-apiserver-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.360255   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.375744   11684 pod_ready.go:92] pod "kube-controller-manager-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.375815   11684 pod_ready.go:81] duration metric: took 15.4889ms waiting for pod "kube-controller-manager-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.375815   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-mz5j7" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.401167   11684 pod_ready.go:92] pod "kube-proxy-mz5j7" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.401167   11684 pod_ready.go:81] duration metric: took 25.3516ms waiting for pod "kube-proxy-mz5j7" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.401722   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.722691   11684 pod_ready.go:92] pod "kube-scheduler-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.722691   11684 pod_ready.go:81] duration metric: took 320.9674ms waiting for pod "kube-scheduler-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.722691   11684 pod_ready.go:38] duration metric: took 13.9655934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:00:24.722786   11684 api_server.go:52] waiting for apiserver process to appear ...
	I0226 12:00:24.736874   11684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:00:24.765340   11684 api_server.go:72] duration metric: took 14.3384547s to wait for apiserver process to appear ...
	I0226 12:00:24.765375   11684 api_server.go:88] waiting for apiserver healthz status ...
	I0226 12:00:24.765415   11684 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55861/healthz ...
	I0226 12:00:24.787487   11684 api_server.go:279] https://127.0.0.1:55861/healthz returned 200:
	ok
	I0226 12:00:24.793024   11684 api_server.go:141] control plane version: v1.28.4
	I0226 12:00:24.793109   11684 api_server.go:131] duration metric: took 27.694ms to wait for apiserver health ...
	I0226 12:00:24.793109   11684 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:00:24.942541   11684 system_pods.go:59] 7 kube-system pods found
	I0226 12:00:24.942541   11684 system_pods.go:61] "coredns-5dd5756b68-8d5fm" [44abce95-9488-4f8d-b4f7-3957a218aee2] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "etcd-kubenet-968100" [615298a9-cf9c-4996-9bef-9d73dd60c158] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "kube-apiserver-kubenet-968100" [42272ac0-93d3-4cd6-ba29-5b8391251399] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "kube-controller-manager-kubenet-968100" [66df9813-8c0f-4139-a21b-44fb8f435401] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "kube-proxy-mz5j7" [08d21e16-bfd3-435b-b021-dc6a157c5527] Running
	I0226 12:00:24.942728   11684 system_pods.go:61] "kube-scheduler-kubenet-968100" [fea9e51e-6f10-4817-9ded-7e4c809359a6] Running
	I0226 12:00:24.942728   11684 system_pods.go:61] "storage-provisioner" [960a8845-f6dc-4d2a-8647-2ec83adf88de] Running
	I0226 12:00:24.942793   11684 system_pods.go:74] duration metric: took 149.6238ms to wait for pod list to return data ...
	I0226 12:00:24.942821   11684 default_sa.go:34] waiting for default service account to be created ...
	I0226 12:00:25.123320   11684 default_sa.go:45] found service account: "default"
	I0226 12:00:25.123854   11684 default_sa.go:55] duration metric: took 180.969ms for default service account to be created ...
	I0226 12:00:25.123854   11684 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 12:00:25.334693   11684 system_pods.go:86] 7 kube-system pods found
	I0226 12:00:25.334693   11684 system_pods.go:89] "coredns-5dd5756b68-8d5fm" [44abce95-9488-4f8d-b4f7-3957a218aee2] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "etcd-kubenet-968100" [615298a9-cf9c-4996-9bef-9d73dd60c158] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-apiserver-kubenet-968100" [42272ac0-93d3-4cd6-ba29-5b8391251399] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-controller-manager-kubenet-968100" [66df9813-8c0f-4139-a21b-44fb8f435401] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-proxy-mz5j7" [08d21e16-bfd3-435b-b021-dc6a157c5527] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-scheduler-kubenet-968100" [fea9e51e-6f10-4817-9ded-7e4c809359a6] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "storage-provisioner" [960a8845-f6dc-4d2a-8647-2ec83adf88de] Running
	I0226 12:00:25.334693   11684 system_pods.go:126] duration metric: took 210.8376ms to wait for k8s-apps to be running ...
	I0226 12:00:25.334693   11684 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 12:00:25.345514   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:00:25.373071   11684 system_svc.go:56] duration metric: took 38.3777ms WaitForService to wait for kubelet.
	I0226 12:00:25.373071   11684 kubeadm.go:581] duration metric: took 14.9461813s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 12:00:25.373071   11684 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:00:25.534679   11684 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0226 12:00:25.534679   11684 node_conditions.go:123] node cpu capacity is 16
	I0226 12:00:25.534679   11684 node_conditions.go:105] duration metric: took 161.6068ms to run NodePressure ...
	I0226 12:00:25.535223   11684 start.go:228] waiting for startup goroutines ...
	I0226 12:00:25.535223   11684 start.go:233] waiting for cluster config update ...
	I0226 12:00:25.535223   11684 start.go:242] writing updated cluster config ...
	I0226 12:00:25.546875   11684 ssh_runner.go:195] Run: rm -f paused
	I0226 12:00:25.681754   11684 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 12:00:25.687316   11684 out.go:177] * Done! kubectl is now configured to use "kubenet-968100" cluster and "default" namespace by default
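The node_ready/pod_ready waits traced in the successful kubenet-968100 run above follow a simple poll-until-ready-with-timeout pattern: each check waits up to 15m0s, and a pod whose phase is already "Succeeded" is skipped rather than waited on (it will never become Ready again). A minimal illustrative sketch of that pattern — not minikube's actual pod_ready.go code; all names here are hypothetical:

```python
import time

def wait_ready(check, timeout=900.0, interval=2.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it reports "Ready" (return True), "Succeeded"
    (skip: return False), or the timeout expires (raise TimeoutError)."""
    deadline = clock() + timeout
    while True:
        phase = check()
        if phase == "Ready":
            return True
        if phase == "Succeeded":
            # Pod completed; it will never be Ready again, so skip it,
            # mirroring the "(skipping!)" branch in the log above.
            return False
        if clock() >= deadline:
            raise TimeoutError("timed out waiting for the condition")
        sleep(interval)

# A pod that reports NotReady twice before becoming Ready:
states = iter(["NotReady", "NotReady", "Ready"])
print(wait_ready(lambda: next(states), timeout=5.0, interval=0.0))  # True
```

The early return on "Succeeded" is what produces the `pod_ready.go:97` / `pod_ready.go:66` messages for coredns-5dd5756b68-bpt97 above: the second CoreDNS replica was scaled away mid-wait, so its pod completed instead of ever turning Ready.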
	I0226 12:02:02.411780   10808 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 12:02:02.412124   10808 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 12:02:02.419498   10808 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 12:02:02.419498   10808 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 12:02:02.420047   10808 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 12:02:02.420163   10808 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 12:02:02.420163   10808 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 12:02:02.420801   10808 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 12:02:02.421076   10808 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 12:02:02.421178   10808 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 12:02:02.421395   10808 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 12:02:02.424697   10808 out.go:204]   - Generating certificates and keys ...
	I0226 12:02:02.425833   10808 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 12:02:02.426007   10808 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 12:02:02.426252   10808 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 12:02:02.426462   10808 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 12:02:02.426621   10808 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 12:02:02.426771   10808 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 12:02:02.426995   10808 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 12:02:02.427148   10808 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 12:02:02.427347   10808 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 12:02:02.427499   10808 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 12:02:02.427606   10808 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 12:02:02.427782   10808 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 12:02:02.427967   10808 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 12:02:02.428065   10808 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 12:02:02.430918   10808 out.go:204]   - Booting up control plane ...
	I0226 12:02:02.431350   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 12:02:02.432186   10808 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 12:02:02.432186   10808 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 12:02:02.432186   10808 kubeadm.go:322] 
	I0226 12:02:02.432486   10808 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 12:02:02.432535   10808 kubeadm.go:322] 	timed out waiting for the condition
	I0226 12:02:02.432624   10808 kubeadm.go:322] 
	I0226 12:02:02.432759   10808 kubeadm.go:322] This error is likely caused by:
	I0226 12:02:02.432855   10808 kubeadm.go:322] 	- The kubelet is not running
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 12:02:02.433010   10808 kubeadm.go:322] 
	I0226 12:02:02.433010   10808 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 12:02:02.433010   10808 kubeadm.go:322] 
	I0226 12:02:02.433539   10808 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 12:02:02.433913   10808 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 12:02:02.434165   10808 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 12:02:02.434297   10808 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 12:02:02.434487   10808 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 12:02:02.434487   10808 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 12:02:02.434860   10808 kubeadm.go:406] StartCluster complete in 12m33.1269006s
	I0226 12:02:02.442099   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 12:02:02.490569   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.490672   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 12:02:02.502331   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 12:02:02.541142   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.541142   10808 logs.go:278] No container was found matching "etcd"
	I0226 12:02:02.550354   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 12:02:02.587881   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.587881   10808 logs.go:278] No container was found matching "coredns"
	I0226 12:02:02.596635   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 12:02:02.635750   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.635846   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 12:02:02.636707   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 12:02:02.683458   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.683458   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 12:02:02.692816   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 12:02:02.730653   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.730653   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 12:02:02.739810   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 12:02:02.776933   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.776933   10808 logs.go:278] No container was found matching "kindnet"
	I0226 12:02:02.791523   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 12:02:02.829156   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.829359   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 12:02:02.829359   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 12:02:02.829359   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 12:02:02.873211   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 12:02:02.878537   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 12:02:02.879201   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 12:02:02.898957   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:50 old-k8s-version-321200 kubelet[11360]: E0226 12:01:50.055188   11360 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 12:02:02.901305   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:51 old-k8s-version-321200 kubelet[11360]: E0226 12:01:51.050143   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 12:02:02.913165   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:56 old-k8s-version-321200 kubelet[11360]: E0226 12:01:56.056683   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 12:02:02.913165   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:56 old-k8s-version-321200 kubelet[11360]: E0226 12:01:56.058255   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 12:02:02.925848   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 12:02:02.925848   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 12:02:02.960790   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 12:02:02.960790   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 12:02:03.127567   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 12:02:03.127567   10808 logs.go:123] Gathering logs for Docker ...
	I0226 12:02:03.127567   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 12:02:03.180152   10808 logs.go:123] Gathering logs for container status ...
	I0226 12:02:03.180152   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 12:02:03.265121   10808 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 12:02:03.266022   10808 out.go:239] * 
	W0226 12:02:03.266022   10808 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 12:02:03.266022   10808 out.go:239] * 
	W0226 12:02:03.267771   10808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 12:02:03.272116   10808 out.go:177] X Problems detected in kubelet:
	I0226 12:02:03.277200   10808 out.go:177]   Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 12:02:03.282640   10808 out.go:177]   Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 12:02:03.288461   10808 out.go:177]   Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 12:02:03.295212   10808 out.go:177] 
	W0226 12:02:03.297284   10808 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 12:02:03.297284   10808 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 12:02:03.297284   10808 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 12:02:03.301974   10808 out.go:177] 
	
	
	==> Docker <==
	Feb 26 11:49:19 old-k8s-version-321200 systemd[1]: docker.service: Deactivated successfully.
	Feb 26 11:49:19 old-k8s-version-321200 systemd[1]: Stopped Docker Application Container Engine.
	Feb 26 11:49:19 old-k8s-version-321200 systemd[1]: Starting Docker Application Container Engine...
	Feb 26 11:49:19 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:19.396489498Z" level=info msg="Starting up"
	Feb 26 11:49:20 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:20.314342121Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 26 11:49:25 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:25.524607951Z" level=info msg="Loading containers: start."
	Feb 26 11:49:25 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:25.902105685Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.009629742Z" level=info msg="Loading containers: done."
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165501852Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165622656Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165636856Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165644257Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165670758Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165728759Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.235451801Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.235470401Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:49:26 old-k8s-version-321200 systemd[1]: Started Docker Application Container Engine.
	Feb 26 11:53:52 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:52.698379318Z" level=info msg="ignoring event" container=22eef693a3b25b970c3f15b213dae642250f97f419baab1b95e305257b0bf337 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:53:53 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:53.154075076Z" level=info msg="ignoring event" container=9f64d56ec69fdb95b0a13228a04c2b050bed331d09cfc2f97ba7579f488e520e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:53:53 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:53.609017011Z" level=info msg="ignoring event" container=3973f89914b8e77a02331ea5c13ddc208683027d2e5256a9e3d8bfe136978f77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:53:53 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:53.963497669Z" level=info msg="ignoring event" container=8f606e19a68cee783b718f9730ffa0d7ab6495fc67bdbba4a348ca1e4e2ab259 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:58 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:58.547150857Z" level=info msg="ignoring event" container=60471072de846312b8f561a812967c588c28956c65d9e28c8ae15470fcf390d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:59 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:59.052701708Z" level=info msg="ignoring event" container=0e678e59ced845ab74f0a63cd8d32af6aac57e6129e64fbcb469871b01b46009 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:59 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:59.309964423Z" level=info msg="ignoring event" container=d4761dcb7c564c98bf5f4a32e7c2b23e0c8ceaec1b2771b6decea1c8e45b8fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:59 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:59.525063459Z" level=info msg="ignoring event" container=def1fa160e71067194fb930c3284b0eac9ba724960317fa85b3f262024ce625c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb26 11:48] hrtimer: interrupt took 2869333 ns
	
	
	==> kernel <==
	 12:02:07 up  1:42,  0 users,  load average: 0.91, 4.00, 4.87
	Linux old-k8s-version-321200 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.064880   11360 kuberuntime_manager.go:783] container start failed: ImageInspectError: Failed to inspect image "k8s.gcr.io/kube-apiserver:v1.16.0": Id or size of image "k8s.gcr.io/kube-apiserver:v1.16.0" is not set
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.064988   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.086065   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.097073   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.186790   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.287722   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.310481   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-321200&limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.388583   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.489409   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.504890   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-321200&limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.590152   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: I0226 12:02:06.681978   11360 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.690954   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.698596   11360 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: I0226 12:02:06.720741   11360 kubelet_node_status.go:72] Attempting to register node old-k8s-version-321200
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.791648   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.880862   11360 kubelet_node_status.go:94] Unable to register node "old-k8s-version-321200" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.892145   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:06 old-k8s-version-321200 kubelet[11360]: E0226 12:02:06.992995   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:07 old-k8s-version-321200 kubelet[11360]: E0226 12:02:07.080615   11360 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:07 old-k8s-version-321200 kubelet[11360]: E0226 12:02:07.093970   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:07 old-k8s-version-321200 kubelet[11360]: E0226 12:02:07.194934   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:07 old-k8s-version-321200 kubelet[11360]: E0226 12:02:07.280927   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:02:07 old-k8s-version-321200 kubelet[11360]: E0226 12:02:07.295454   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:02:07 old-k8s-version-321200 kubelet[11360]: E0226 12:02:07.396037   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 12:02:05.862265    6648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 2 (1.2265143s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 12:02:07.998199    1864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-321200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (807.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (39.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-336100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-336100 --alsologtostderr -v=1: exit status 80 (5.0007713s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-336100 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:49:10.157237    9092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 11:49:10.280236    9092 out.go:291] Setting OutFile to fd 1968 ...
	I0226 11:49:10.280236    9092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:49:10.280236    9092 out.go:304] Setting ErrFile to fd 1652...
	I0226 11:49:10.281251    9092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:49:10.309254    9092 out.go:298] Setting JSON to false
	I0226 11:49:10.309254    9092 mustload.go:65] Loading cluster: default-k8s-diff-port-336100
	I0226 11:49:10.313322    9092 config.go:182] Loaded profile config "default-k8s-diff-port-336100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:49:10.345250    9092 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336100 --format={{.State.Status}}
	I0226 11:49:10.609803    9092 host.go:66] Checking if "default-k8s-diff-port-336100" exists ...
	I0226 11:49:10.625812    9092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-336100
	I0226 11:49:10.866764    9092 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/17936/minikube-v1.32.1-1708020063-17936-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.32.1-1708020063-17936/minikube-v1.32.1-1708020063-17936-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.32.1-1708020063-17936-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube7:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-336100 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0226 11:49:10.874746    9092 out.go:177] * Pausing node default-k8s-diff-port-336100 ... 
	I0226 11:49:10.882750    9092 host.go:66] Checking if "default-k8s-diff-port-336100" exists ...
	I0226 11:49:10.901742    9092 ssh_runner.go:195] Run: systemctl --version
	I0226 11:49:10.917751    9092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336100
	I0226 11:49:11.151946    9092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54283 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\default-k8s-diff-port-336100\id_rsa Username:docker}
	I0226 11:49:11.388956    9092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:49:11.410951    9092 pause.go:51] kubelet running: true
	I0226 11:49:11.425955    9092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0226 11:49:11.843758    9092 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0226 11:49:11.896081    9092 docker.go:500] Pausing containers: [025f65d455b3 b9bac999fd7d 45dc9bcb6abb 9b58ef900d13 ed641ac2dd84 a350137fd8cc 128ab6b6902d e3796456c31f fb6ee3715508 10f60aa72eef f88a17167e77 36f9099cc403 f30be2596771 71f8ddd2d1ce 85ea568887df 40c261516b8a 1f13081873a7 4bdbf2ecbaa9]
	I0226 11:49:11.907081    9092 ssh_runner.go:195] Run: docker pause 025f65d455b3 b9bac999fd7d 45dc9bcb6abb 9b58ef900d13 ed641ac2dd84 a350137fd8cc 128ab6b6902d e3796456c31f fb6ee3715508 10f60aa72eef f88a17167e77 36f9099cc403 f30be2596771 71f8ddd2d1ce 85ea568887df 40c261516b8a 1f13081873a7 4bdbf2ecbaa9
	I0226 11:49:14.816616    9092 ssh_runner.go:235] Completed: docker pause 025f65d455b3 b9bac999fd7d 45dc9bcb6abb 9b58ef900d13 ed641ac2dd84 a350137fd8cc 128ab6b6902d e3796456c31f fb6ee3715508 10f60aa72eef f88a17167e77 36f9099cc403 f30be2596771 71f8ddd2d1ce 85ea568887df 40c261516b8a 1f13081873a7 4bdbf2ecbaa9: (2.9085069s)
	I0226 11:49:14.824632    9092 out.go:177] 
	W0226 11:49:14.828592    9092 out.go:239] X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause 025f65d455b3 b9bac999fd7d 45dc9bcb6abb 9b58ef900d13 ed641ac2dd84 a350137fd8cc 128ab6b6902d e3796456c31f fb6ee3715508 10f60aa72eef f88a17167e77 36f9099cc403 f30be2596771 71f8ddd2d1ce 85ea568887df 40c261516b8a 1f13081873a7 4bdbf2ecbaa9: Process exited with status 1
	stdout:
	025f65d455b3
	b9bac999fd7d
	45dc9bcb6abb
	9b58ef900d13
	ed641ac2dd84
	a350137fd8cc
	128ab6b6902d
	e3796456c31f
	fb6ee3715508
	10f60aa72eef
	f88a17167e77
	f30be2596771
	71f8ddd2d1ce
	85ea568887df
	40c261516b8a
	1f13081873a7
	4bdbf2ecbaa9
	
	stderr:
	Error response from daemon: cannot pause container 36f9099cc403060dbf46272d2479ef74b611d60766f7a0da5b42e3ad41e0153d: OCI runtime pause failed: unable to freeze: unknown
	
	W0226 11:49:14.828592    9092 out.go:239] * 
	W0226 11:49:14.942594    9092 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_pause_8a34b101973a5475dd3f2895f630b939c2202307_7.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 11:49:14.948599    9092 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-336100 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-336100
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-336100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3",
	        "Created": "2024-02-26T11:41:04.097888482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:42:43.702971111Z",
	            "FinishedAt": "2024-02-26T11:42:39.390055524Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/hosts",
	        "LogPath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3-json.log",
	        "Name": "/default-k8s-diff-port-336100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-336100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-336100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-336100",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-336100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-336100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-336100",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-336100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70c1d1bbcb3f4fe67c5335ef03c16ee4e11efdbbea343c7cbf60650885e572de",
	            "SandboxKey": "/var/run/docker/netns/70c1d1bbcb3f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54283"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54284"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54285"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54286"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54287"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-336100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dc1783eb6323",
	                        "default-k8s-diff-port-336100"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "8ab5fb28410c8674c42cbee3241ecadd46423b5f521a9025f751cf381c550194",
	                    "EndpointID": "974d6b559c55e5f3145ac9765d41787096b52bf59c933d02cf33bda892d95185",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "default-k8s-diff-port-336100",
	                        "dc1783eb6323"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100: exit status 2 (1.4124785s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:49:15.372845    3844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-336100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-336100 logs -n 25: (14.8873761s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-755000                 | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:48 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-336100  | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | default-k8s-diff-port-336100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-336100       | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:48 UTC |
	|         | default-k8s-diff-port-336100                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-321200        | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| image   | no-preload-279800 image list                           | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	| delete  | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	| start   | -p newest-cni-571300 --memory=2200 --alsologtostderr   | newest-cni-571300            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.29.0-rc.2      |                              |                   |         |                     |                     |
	| stop    | -p old-k8s-version-321200                              | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-321200             | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-321200                              | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-755000 image list                          | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:49 UTC |
	| delete  | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	| start   | -p auto-968100 --memory=3072                           | auto-968100                  | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-336100                           | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-336100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:49:06
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:49:06.729447      32 out.go:291] Setting OutFile to fd 1836 ...
	I0226 11:49:06.730447      32 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:49:06.730447      32 out.go:304] Setting ErrFile to fd 2036...
	I0226 11:49:06.730447      32 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:49:06.760426      32 out.go:298] Setting JSON to false
	I0226 11:49:06.764431      32 start.go:129] hostinfo: {"hostname":"minikube7","uptime":5423,"bootTime":1708942723,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:49:06.765425      32 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:49:06.769425      32 out.go:177] * [auto-968100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:49:06.775435      32 notify.go:220] Checking for updates...
	I0226 11:49:06.779433      32 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:49:06.784433      32 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:49:06.791422      32 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:49:06.798459      32 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:49:06.803441      32 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:49:06.809424      32 config.go:182] Loaded profile config "default-k8s-diff-port-336100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:49:06.809424      32 config.go:182] Loaded profile config "newest-cni-571300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 11:49:06.810432      32 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:49:06.810432      32 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:49:07.188434      32 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:49:07.204455      32 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:49:07.682799      32 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:49:07.638279587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 11:49:07.686767      32 out.go:177] * Using the docker driver based on user configuration
	I0226 11:49:07.692780      32 start.go:299] selected driver: docker
	I0226 11:49:07.692780      32 start.go:903] validating driver "docker" against <nil>
	I0226 11:49:07.692780      32 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:49:07.787802      32 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:49:08.255774      32 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:49:08.197369964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 11:49:08.255774      32 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:49:08.256770      32 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:49:08.260767      32 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 11:49:08.264770      32 cni.go:84] Creating CNI manager for ""
	I0226 11:49:08.264770      32 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 11:49:08.264770      32 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0226 11:49:08.264770      32 start_flags.go:323] config:
	{Name:auto-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:49:08.274820      32 out.go:177] * Starting control plane node auto-968100 in cluster auto-968100
	I0226 11:49:08.278788      32 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:49:08.282776      32 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:49:08.288800      32 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:49:08.288800      32 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:49:08.288800      32 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 11:49:08.288800      32 cache.go:56] Caching tarball of preloaded images
	I0226 11:49:08.288800      32 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:49:08.288800      32 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0226 11:49:08.289771      32 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\config.json ...
	I0226 11:49:08.289771      32 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\config.json: {Name:mk9988f5dd5d27595a7fddc854aab47e6accdd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:08.512788      32 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:49:08.512788      32 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:49:08.512788      32 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:49:08.512788      32 start.go:365] acquiring machines lock for auto-968100: {Name:mkc42eef878ef2f28860fc95f93b3960dadab276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:49:08.512788      32 start.go:369] acquired machines lock for "auto-968100" in 0s
	I0226 11:49:08.512788      32 start.go:93] Provisioning new machine with config: &{Name:auto-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-968100 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:49:08.512788      32 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:49:09.993218    8212 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0226 11:49:09.993218    8212 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:49:09.994247    8212 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:49:09.994247    8212 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:49:09.994247    8212 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:49:09.994247    8212 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:49:10.000206    8212 out.go:204]   - Generating certificates and keys ...
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:49:10.002211    8212 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:49:10.002211    8212 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:49:10.002211    8212 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-571300] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-571300] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:49:10.004227    8212 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:49:10.004227    8212 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:49:10.005210    8212 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:49:10.005210    8212 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:49:10.005210    8212 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:49:10.005210    8212 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:49:10.009212    8212 out.go:204]   - Booting up control plane ...
	I0226 11:49:10.009212    8212 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:49:10.009212    8212 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:49:10.009212    8212 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:49:10.010231    8212 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:49:10.010231    8212 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:49:10.010231    8212 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 11:49:10.010231    8212 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:49:10.011226    8212 kubeadm.go:322] [apiclient] All control plane components are healthy after 14.507596 seconds
	I0226 11:49:10.011226    8212 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 11:49:10.011226    8212 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 11:49:10.012225    8212 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 11:49:10.012225    8212 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-571300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 11:49:10.012225    8212 kubeadm.go:322] [bootstrap-token] Using token: k6p57a.7mk8dmm9zmh6wzrn
	I0226 11:49:10.019260    8212 out.go:204]   - Configuring RBAC rules ...
	I0226 11:49:10.019260    8212 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 11:49:10.019260    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 11:49:10.021232    8212 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 11:49:10.021232    8212 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 11:49:10.021232    8212 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 11:49:10.021232    8212 kubeadm.go:322] 
	I0226 11:49:10.021232    8212 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 11:49:10.021232    8212 kubeadm.go:322] 
	I0226 11:49:10.021232    8212 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 11:49:10.022238    8212 kubeadm.go:322] 
	I0226 11:49:10.022238    8212 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 11:49:10.022238    8212 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 11:49:10.022238    8212 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 11:49:10.022238    8212 kubeadm.go:322] 
	I0226 11:49:10.022238    8212 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 11:49:10.022238    8212 kubeadm.go:322] 
	I0226 11:49:10.023232    8212 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 11:49:10.023232    8212 kubeadm.go:322] 
	I0226 11:49:10.023232    8212 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 11:49:10.023232    8212 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 11:49:10.023232    8212 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 11:49:10.023232    8212 kubeadm.go:322] 
	I0226 11:49:10.024216    8212 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 11:49:10.024216    8212 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 11:49:10.024216    8212 kubeadm.go:322] 
	I0226 11:49:10.024216    8212 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k6p57a.7mk8dmm9zmh6wzrn \
	I0226 11:49:10.025219    8212 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d \
	I0226 11:49:10.025219    8212 kubeadm.go:322] 	--control-plane 
	I0226 11:49:10.025219    8212 kubeadm.go:322] 
	I0226 11:49:10.025219    8212 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 11:49:10.025219    8212 kubeadm.go:322] 
	I0226 11:49:10.025219    8212 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k6p57a.7mk8dmm9zmh6wzrn \
	I0226 11:49:10.026308    8212 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d 
	I0226 11:49:10.026308    8212 cni.go:84] Creating CNI manager for ""
	I0226 11:49:10.026308    8212 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 11:49:10.032214    8212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0226 11:49:08.519791      32 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0226 11:49:08.519791      32 start.go:159] libmachine.API.Create for "auto-968100" (driver="docker")
	I0226 11:49:08.520773      32 client.go:168] LocalClient.Create starting
	I0226 11:49:08.521806      32 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 11:49:08.521806      32 main.go:141] libmachine: Decoding PEM data...
	I0226 11:49:08.521806      32 main.go:141] libmachine: Parsing certificate...
	I0226 11:49:08.521806      32 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 11:49:08.522781      32 main.go:141] libmachine: Decoding PEM data...
	I0226 11:49:08.522781      32 main.go:141] libmachine: Parsing certificate...
	I0226 11:49:08.542797      32 cli_runner.go:164] Run: docker network inspect auto-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:49:08.785771      32 cli_runner.go:211] docker network inspect auto-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:49:08.796777      32 network_create.go:281] running [docker network inspect auto-968100] to gather additional debugging logs...
	I0226 11:49:08.796777      32 cli_runner.go:164] Run: docker network inspect auto-968100
	W0226 11:49:09.024780      32 cli_runner.go:211] docker network inspect auto-968100 returned with exit code 1
	I0226 11:49:09.024780      32 network_create.go:284] error running [docker network inspect auto-968100]: docker network inspect auto-968100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-968100 not found
	I0226 11:49:09.024780      32 network_create.go:286] output of [docker network inspect auto-968100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-968100 not found
	
	** /stderr **
	I0226 11:49:09.038791      32 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:49:09.318810      32 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.350776      32 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.381773      32 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.428780      32 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.460807      32 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f31d0}
	I0226 11:49:09.460807      32 network_create.go:124] attempt to create docker network auto-968100 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0226 11:49:09.479825      32 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100
	W0226 11:49:09.738218      32 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100 returned with exit code 1
	W0226 11:49:09.738218      32 network_create.go:149] failed to create docker network auto-968100 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 11:49:09.738218      32 network_create.go:116] failed to create docker network auto-968100 192.168.85.0/24, will retry: subnet is taken
	I0226 11:49:09.778227      32 network.go:210] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.811330      32 network.go:207] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00240c960}
	I0226 11:49:09.811330      32 network_create.go:124] attempt to create docker network auto-968100 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0226 11:49:09.822201      32 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100
	I0226 11:49:10.199249      32 network_create.go:108] docker network auto-968100 192.168.94.0/24 created
	I0226 11:49:10.199249      32 kic.go:121] calculated static IP "192.168.94.2" for the "auto-968100" container
	I0226 11:49:10.228241      32 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:49:10.486254      32 cli_runner.go:164] Run: docker volume create auto-968100 --label name.minikube.sigs.k8s.io=auto-968100 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:49:10.736807      32 oci.go:103] Successfully created a docker volume auto-968100
	I0226 11:49:10.750820      32 cli_runner.go:164] Run: docker run --rm --name auto-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-968100 --entrypoint /usr/bin/test -v auto-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:49:10.054415    8212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0226 11:49:10.091218    8212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0226 11:49:10.268252    8212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 11:49:10.295248    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:10.296258    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=newest-cni-571300 minikube.k8s.io/updated_at=2024_02_26T11_49_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:10.302249    8212 ops.go:34] apiserver oom_adj: -16
	I0226 11:49:10.838810    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:13.959181   10808 docker.go:649] Took 15.826555 seconds to copy over tarball
	I0226 11:49:13.972530   10808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> Docker <==
	Feb 26 11:47:55 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:47:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/40c261516b8ac5d520f24abe265b7b2a5e8835b21917909f39387e91bdc774b2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:48:25 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/10f60aa72eef7e8d7784e141e9fc713b4186a73258ec7f3e4fcd522ca94e4b16/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:48:25 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c31578b82eeb3bfb4a9d89f7618baefb552a2a3931a9144982093d1b10bc5fac/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:48:26 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fb6ee37155082429e6f4c536ffb9d992d900eb5532eb820a15734845d2969641/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:48:30 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Feb 26 11:48:31 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a350137fd8cc6d8fc27766d3f802804ae526f732cc0cb6ffae0d5a48d3769809/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:48:32 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed641ac2dd84f690d35417e0779a14fc89bb7d4440bd9f5fe5cfaf01da951880/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 26 11:48:32 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:32.797527367Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Feb 26 11:48:32 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:32.797698274Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Feb 26 11:48:32 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:32.859795883Z" level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Feb 26 11:48:33 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45dc9bcb6abb9a7631143a8207ba9aaf2a67676ab13fa9bc6d7d963bda776b58/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 26 11:48:33 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9bac999fd7d6b3d7cce54b3a8f45722a964880064fb7d428118c71bd57f04be/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 26 11:48:34 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:34.099987675Z" level=info msg="ignoring event" container=0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:48:34 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:34.168056621Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=cbf5992a9096f203 traceID=ed61dcc0c019266d096562dadbad75b2
	Feb 26 11:48:34 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:34.581566127Z" level=info msg="ignoring event" container=c31578b82eeb3bfb4a9d89f7618baefb552a2a3931a9144982093d1b10bc5fac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:48:44 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:44Z" level=info msg="Pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: ee3247c7e545: Extracting [======================>                            ]  34.54MB/75.78MB"
	Feb 26 11:48:54 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:54Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Feb 26 11:48:54 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:54.382708386Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=a6a2c44b9ea81091 traceID=16079db72e4ac8c8935161d6ef04f995
	Feb 26 11:48:55 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:55.268174387Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Feb 26 11:48:55 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:55.268450990Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest version 2, schema 1 support will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format, or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Feb 26 11:49:05 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:49:05Z" level=info msg="Pulling image registry.k8s.io/echoserver:1.4: 412c0feed608: Extracting [>                                                  ]  294.9kB/27.02MB"
	Feb 26 11:49:12 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:49:12.444392157Z" level=error msg="Handler for POST /v1.44/containers/36f9099cc403/pause returned error: cannot pause container 36f9099cc403060dbf46272d2479ef74b611d60766f7a0da5b42e3ad41e0153d: OCI runtime pause failed: unable to freeze: unknown"
	Feb 26 11:49:15 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:49:15Z" level=info msg="Pulling image registry.k8s.io/echoserver:1.4: d3c51dabc842: Extracting [==================================================>]     172B/172B"
	Feb 26 11:49:18 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:49:18Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: Status: Downloaded newer image for registry.k8s.io/echoserver:1.4"
	Feb 26 11:49:18 default-k8s-diff-port-336100 cri-dockerd[1222]: W0226 11:49:18.573716    1222 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	025f65d455b3f       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   26 seconds ago       Running             kubernetes-dashboard      0                   45dc9bcb6abb9       kubernetes-dashboard-8694d4445c-ws57n
	9b58ef900d133       6e38f40d628db                                                                                    48 seconds ago       Running             storage-provisioner       0                   a350137fd8cc6       storage-provisioner
	128ab6b6902de       ead0a4a53df89                                                                                    54 seconds ago       Running             coredns                   0                   fb6ee37155082       coredns-5dd5756b68-6dzcq
	e3796456c31fc       83f6cc407eed8                                                                                    55 seconds ago       Running             kube-proxy                0                   10f60aa72eef7       kube-proxy-wn86k
	f88a17167e777       d058aa5ab969c                                                                                    About a minute ago   Running             kube-controller-manager   0                   40c261516b8ac       kube-controller-manager-default-k8s-diff-port-336100
	36f9099cc4030       73deb9a3f7025                                                                                    About a minute ago   Running             etcd                      0                   4bdbf2ecbaa92       etcd-default-k8s-diff-port-336100
	f30be25967718       e3db313c6dbc0                                                                                    About a minute ago   Running             kube-scheduler            0                   85ea568887df4       kube-scheduler-default-k8s-diff-port-336100
	71f8ddd2d1ceb       7fe0e6f37db33                                                                                    About a minute ago   Running             kube-apiserver            0                   1f13081873a70       kube-apiserver-default-k8s-diff-port-336100
	
	
	==> coredns [128ab6b6902d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39021 - 46909 "HINFO IN 9218697660716973466.1102664807043899096. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.179612512s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[Feb26 11:48] hrtimer: interrupt took 2869333 ns
	
	
	==> etcd [36f9099cc403] <==
	{"level":"warn","ts":"2024-02-26T11:48:47.272509Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-26T11:48:46.591718Z","time spent":"680.786028ms","remote":"127.0.0.1:35494","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-02-26T11:48:47.272704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.201082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-jvkjt.17b767410b462ca7\" ","response":"range_response_count:1 size:839"}
	{"level":"info","ts":"2024-02-26T11:48:47.272837Z","caller":"traceutil/trace.go:171","msg":"trace[1960303670] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-jvkjt.17b767410b462ca7; range_end:; response_count:1; response_revision:557; }","duration":"179.330485ms","start":"2024-02-26T11:48:47.093426Z","end":"2024-02-26T11:48:47.272757Z","steps":["trace[1960303670] 'agreement among raft nodes before linearized reading'  (duration: 179.149281ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.533626Z","caller":"traceutil/trace.go:171","msg":"trace[1772795827] linearizableReadLoop","detail":"{readStateIndex:577; appliedIndex:576; }","duration":"259.114157ms","start":"2024-02-26T11:48:47.274484Z","end":"2024-02-26T11:48:47.533598Z","steps":["trace[1772795827] 'read index received'  (duration: 255.889286ms)","trace[1772795827] 'applied index is now lower than readState.Index'  (duration: 3.223771ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:48:47.533825Z","caller":"traceutil/trace.go:171","msg":"trace[1484794696] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"259.632269ms","start":"2024-02-26T11:48:47.274178Z","end":"2024-02-26T11:48:47.533811Z","steps":["trace[1484794696] 'process raft request'  (duration: 256.180992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:47.534001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.679671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-26T11:48:47.534058Z","caller":"traceutil/trace.go:171","msg":"trace[736565086] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:558; }","duration":"259.751572ms","start":"2024-02-26T11:48:47.274291Z","end":"2024-02-26T11:48:47.534043Z","steps":["trace[736565086] 'agreement among raft nodes before linearized reading'  (duration: 259.557668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:47.568252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.261817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"warn","ts":"2024-02-26T11:48:47.568293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.03849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:7558"}
	{"level":"info","ts":"2024-02-26T11:48:47.568319Z","caller":"traceutil/trace.go:171","msg":"trace[1078504284] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:559; }","duration":"203.335719ms","start":"2024-02-26T11:48:47.364967Z","end":"2024-02-26T11:48:47.568303Z","steps":["trace[1078504284] 'agreement among raft nodes before linearized reading'  (duration: 203.105713ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.56835Z","caller":"traceutil/trace.go:171","msg":"trace[888186011] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:559; }","duration":"193.100991ms","start":"2024-02-26T11:48:47.375234Z","end":"2024-02-26T11:48:47.568335Z","steps":["trace[888186011] 'agreement among raft nodes before linearized reading'  (duration: 192.927487ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.568315Z","caller":"traceutil/trace.go:171","msg":"trace[1118518390] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"292.525ms","start":"2024-02-26T11:48:47.275771Z","end":"2024-02-26T11:48:47.568296Z","steps":["trace[1118518390] 'process raft request'  (duration: 292.028689ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.708192Z","caller":"traceutil/trace.go:171","msg":"trace[1122958620] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"130.748806ms","start":"2024-02-26T11:48:47.577342Z","end":"2024-02-26T11:48:47.708091Z","steps":["trace[1122958620] 'process raft request'  (duration: 61.485567ms)","trace[1122958620] 'compare'  (duration: 68.85073ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:48:53.921958Z","caller":"traceutil/trace.go:171","msg":"trace[351637160] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"145.11184ms","start":"2024-02-26T11:48:53.776818Z","end":"2024-02-26T11:48:53.92193Z","steps":["trace[351637160] 'process raft request'  (duration: 144.926438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:54.893724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"525.333778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:7558"}
	{"level":"info","ts":"2024-02-26T11:48:54.893961Z","caller":"traceutil/trace.go:171","msg":"trace[866541400] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:566; }","duration":"525.577681ms","start":"2024-02-26T11:48:54.368361Z","end":"2024-02-26T11:48:54.893939Z","steps":["trace[866541400] 'range keys from in-memory index tree'  (duration: 525.135875ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:54.894184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-26T11:48:54.368339Z","time spent":"525.724082ms","remote":"127.0.0.1:35666","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":2,"response size":7581,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	WARNING: 2024/02/26 11:49:11 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-02-26T11:49:12.565552Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873776142822305911,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-02-26T11:49:12.99027Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.454352514s","expected-duration":"1s"}
	{"level":"warn","ts":"2024-02-26T11:49:13.656769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.412586ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873776142822305913 > lease_revoke:<id:40898de53e367025>","response":"size:28"}
	{"level":"info","ts":"2024-02-26T11:49:13.656978Z","caller":"traceutil/trace.go:171","msg":"trace[406868893] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:610; }","duration":"1.591706658s","start":"2024-02-26T11:49:12.065259Z","end":"2024-02-26T11:49:13.656965Z","steps":["trace[406868893] 'read index received'  (duration: 925.987974ms)","trace[406868893] 'applied index is now lower than readState.Index'  (duration: 665.716784ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-26T11:49:13.657043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.59180626s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-26T11:49:13.657064Z","caller":"traceutil/trace.go:171","msg":"trace[949421801] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:588; }","duration":"1.591836961s","start":"2024-02-26T11:49:12.065219Z","end":"2024-02-26T11:49:13.657056Z","steps":["trace[949421801] 'agreement among raft nodes before linearized reading'  (duration: 1.59178346s)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:49:13.657085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-26T11:49:12.065204Z","time spent":"1.591874861s","remote":"127.0.0.1:35606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":28,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	
	
	==> kernel <==
	 11:49:30 up  1:30,  0 users,  load average: 9.86, 6.43, 4.98
	Linux default-k8s-diff-port-336100 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [71f8ddd2d1ce] <==
	E0226 11:48:30.567421       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0226 11:48:30.599779       1 handler_proxy.go:93] no RequestInfo found in the context
	E0226 11:48:30.599927       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0226 11:48:30.599944       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0226 11:48:30.600112       1 handler_proxy.go:93] no RequestInfo found in the context
	E0226 11:48:30.600405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0226 11:48:30.601143       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0226 11:48:32.905152       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.208.152"}
	I0226 11:48:33.106266       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.227.124"}
	I0226 11:48:35.520402       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0226 11:48:35.521776       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0226 11:48:47.535469       1 trace.go:236] Trace[1370258729]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.103.2,type:*v1.Endpoints,resource:apiServerIPInfo (26-Feb-2024 11:48:46.589) (total time: 946ms):
	Trace[1370258729]: ---"Transaction prepared" 682ms (11:48:47.273)
	Trace[1370258729]: ---"Txn call completed" 261ms (11:48:47.535)
	Trace[1370258729]: [946.150625ms] [946.150625ms] END
	I0226 11:48:54.896780       1 trace.go:236] Trace[1367126895]: "List" accept:application/json, */*,audit-id:31084a52-ce21-4c65-9f75-6779e9d521ec,client:192.168.103.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (26-Feb-2024 11:48:54.367) (total time: 529ms):
	Trace[1367126895]: ["List(recursive=true) etcd3" audit-id:31084a52-ce21-4c65-9f75-6779e9d521ec,key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: 529ms (11:48:54.367)]
	Trace[1367126895]: [529.584523ms] [529.584523ms] END
	I0226 11:49:00.887237       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0226 11:49:11.785616       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 109.802µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0226 11:49:11.785682       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0226 11:49:11.787224       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0226 11:49:11.787390       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0226 11:49:11.788706       1 timeout.go:142] post-timeout activity - time-elapsed: 3.131564ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-diff-port-336100" result: <nil>
	
	
	==> kube-controller-manager [f88a17167e77] <==
	E0226 11:48:32.000588       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0226 11:48:32.000596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.337655ms"
	E0226 11:48:32.000836       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" failed with pods "dashboard-metrics-scraper-5f989dc9cf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0226 11:48:32.001383       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0226 11:48:32.001467       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f989dc9cf-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0226 11:48:32.190350       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-ws57n"
	I0226 11:48:32.206597       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7z4nr"
	I0226 11:48:32.268235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="185.94758ms"
	I0226 11:48:32.268235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.404974ms"
	I0226 11:48:32.484297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="215.845298ms"
	I0226 11:48:32.575623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="306.837208ms"
	I0226 11:48:32.768970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="284.419393ms"
	I0226 11:48:32.769118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="88.904µs"
	I0226 11:48:32.770676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="194.890044ms"
	I0226 11:48:32.770902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.203µs"
	I0226 11:48:33.995091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="81.702µs"
	I0226 11:48:34.789541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.301µs"
	I0226 11:48:35.068532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.401µs"
	I0226 11:48:35.191474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.401µs"
	I0226 11:48:35.227037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.102µs"
	I0226 11:48:35.321738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="70.1µs"
	E0226 11:48:51.689475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0226 11:48:52.093768       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0226 11:48:56.632312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="33.351454ms"
	I0226 11:48:56.632591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.901µs"
	
	
	==> kube-proxy [e3796456c31f] <==
	I0226 11:48:26.904784       1 server_others.go:69] "Using iptables proxy"
	I0226 11:48:26.972827       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0226 11:48:27.092371       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 11:48:27.097953       1 server_others.go:152] "Using iptables Proxier"
	I0226 11:48:27.098009       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 11:48:27.098033       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 11:48:27.098078       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 11:48:27.098885       1 server.go:846] "Version info" version="v1.28.4"
	I0226 11:48:27.098901       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:48:27.101265       1 config.go:188] "Starting service config controller"
	I0226 11:48:27.101311       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 11:48:27.101349       1 config.go:97] "Starting endpoint slice config controller"
	I0226 11:48:27.101365       1 config.go:315] "Starting node config controller"
	I0226 11:48:27.101376       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 11:48:27.101376       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 11:48:27.266955       1 shared_informer.go:318] Caches are synced for node config
	I0226 11:48:27.267010       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0226 11:48:27.267019       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [f30be2596771] <==
	W0226 11:48:02.475804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0226 11:48:02.475952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0226 11:48:02.538826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 11:48:02.538929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0226 11:48:02.542074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 11:48:02.542191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0226 11:48:02.615027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0226 11:48:02.615080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0226 11:48:02.634637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0226 11:48:02.634760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0226 11:48:02.658717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 11:48:02.658825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0226 11:48:02.720005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0226 11:48:02.720140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0226 11:48:02.795056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 11:48:02.795106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0226 11:48:02.813397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 11:48:02.813519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0226 11:48:02.869045       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0226 11:48:02.869244       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0226 11:48:02.869978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0226 11:48:02.870010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0226 11:48:05.771847       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0226 11:48:05.771898       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0226 11:48:11.079466       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:32.373744    9261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed641ac2dd84f690d35417e0779a14fc89bb7d4440bd9f5fe5cfaf01da951880"
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.867825    9261 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.868255    9261 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.869647    9261 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gpddq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-jvkjt_kube-system(1b16d8a8-d350-4790-a653-2eb996a85ab1): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.869815    9261 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-jvkjt" podUID="1b16d8a8-d350-4790-a653-2eb996a85ab1"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:33.869226    9261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bac999fd7d6b3d7cce54b3a8f45722a964880064fb7d428118c71bd57f04be"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:33.903770    9261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45dc9bcb6abb9a7631143a8207ba9aaf2a67676ab13fa9bc6d7d963bda776b58"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:33.967570    9261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.967502671 podCreationTimestamp="2024-02-26 11:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-26 11:48:33.967364368 +0000 UTC m=+24.383499206" watchObservedRunningTime="2024-02-26 11:48:33.967502671 +0000 UTC m=+24.383637409"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:33.975424    9261 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jvkjt" podUID="1b16d8a8-d350-4790-a653-2eb996a85ab1"
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.796682    9261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b484w\" (UniqueName: \"kubernetes.io/projected/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-kube-api-access-b484w\") pod \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\" (UID: \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\") "
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.796897    9261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-config-volume\") pod \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\" (UID: \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\") "
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.798008    9261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52" (UID: "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.802234    9261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-kube-api-access-b484w" (OuterVolumeSpecName: "kube-api-access-b484w") pod "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52" (UID: "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52"). InnerVolumeSpecName "kube-api-access-b484w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.897435    9261 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-config-volume\") on node \"default-k8s-diff-port-336100\" DevicePath \"\""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.897594    9261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-b484w\" (UniqueName: \"kubernetes.io/projected/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-kube-api-access-b484w\") on node \"default-k8s-diff-port-336100\" DevicePath \"\""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.974982    9261 scope.go:117] "RemoveContainer" containerID="0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:35.068867    9261 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jvkjt" podUID="1b16d8a8-d350-4790-a653-2eb996a85ab1"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:35.178776    9261 scope.go:117] "RemoveContainer" containerID="0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:35.184059    9261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed" containerID="0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:35.184236    9261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"} err="failed to get container status \"0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:36 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:36.111601    9261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b9fab7ec-fcd3-454c-b3aa-a0cef2112f52" path="/var/lib/kubelet/pods/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52/volumes"
	Feb 26 11:49:11 default-k8s-diff-port-336100 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Feb 26 11:49:11 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:49:11.677442    9261 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Feb 26 11:49:11 default-k8s-diff-port-336100 systemd[1]: kubelet.service: Deactivated successfully.
	Feb 26 11:49:11 default-k8s-diff-port-336100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [025f65d455b3] <==
	2024/02/26 11:48:55 Starting overwatch
	2024/02/26 11:48:55 Using namespace: kubernetes-dashboard
	2024/02/26 11:48:55 Using in-cluster config to connect to apiserver
	2024/02/26 11:48:55 Using secret token for csrf signing
	2024/02/26 11:48:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/02/26 11:48:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/02/26 11:48:55 Successful initial request to the apiserver, version: v1.28.4
	2024/02/26 11:48:55 Generating JWE encryption key
	2024/02/26 11:48:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/02/26 11:48:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/02/26 11:48:55 Initializing JWE encryption key from synchronized object
	2024/02/26 11:48:55 Creating in-cluster Sidecar client
	2024/02/26 11:48:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/02/26 11:48:55 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [9b58ef900d13] <==
	I0226 11:48:32.908707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0226 11:48:32.984237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0226 11:48:32.984529       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0226 11:48:33.079549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0226 11:48:33.080165       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336100_a694480a-94d6-4240-bee3-2650b9fe7c16!
	I0226 11:48:33.080528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc07d376-077f-4e10-ab25-6a4962b38d1c", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-336100_a694480a-94d6-4240-bee3-2650b9fe7c16 became leader
	I0226 11:48:33.181003       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336100_a694480a-94d6-4240-bee3-2650b9fe7c16!
	

-- /stdout --
** stderr ** 
	W0226 11:49:16.758875    6688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100: exit status 2 (1.4439527s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0226 11:49:31.819536    9716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "default-k8s-diff-port-336100" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-336100
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-336100:

-- stdout --
	[
	    {
	        "Id": "dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3",
	        "Created": "2024-02-26T11:41:04.097888482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:42:43.702971111Z",
	            "FinishedAt": "2024-02-26T11:42:39.390055524Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/hosts",
	        "LogPath": "/var/lib/docker/containers/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3/dc1783eb632319b2ab9ef31bb2d64f5657c6fe2881069f383d9e77e6251494b3-json.log",
	        "Name": "/default-k8s-diff-port-336100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-336100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-336100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8308c5dc6904a38239a63158a4954fa491ba3056ddc8df973f6db0b743577f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-336100",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-336100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-336100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-336100",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-336100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70c1d1bbcb3f4fe67c5335ef03c16ee4e11efdbbea343c7cbf60650885e572de",
	            "SandboxKey": "/var/run/docker/netns/70c1d1bbcb3f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54283"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54284"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54285"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54286"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54287"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-336100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dc1783eb6323",
	                        "default-k8s-diff-port-336100"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "8ab5fb28410c8674c42cbee3241ecadd46423b5f521a9025f751cf381c550194",
	                    "EndpointID": "974d6b559c55e5f3145ac9765d41787096b52bf59c933d02cf33bda892d95185",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "default-k8s-diff-port-336100",
	                        "dc1783eb6323"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100: exit status 2 (1.4215816s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0226 11:49:33.442907    5228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-336100 logs -n 25
E0226 11:49:42.186926   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.201610   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.217001   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.247744   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.295571   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.389400   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.561061   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:42.889226   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-336100 logs -n 25: (12.8454176s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:48 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-336100  | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | default-k8s-diff-port-336100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-336100       | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:42 UTC | 26 Feb 24 11:48 UTC |
	|         | default-k8s-diff-port-336100                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-321200        | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| image   | no-preload-279800 image list                           | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	| delete  | -p no-preload-279800                                   | no-preload-279800            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	| start   | -p newest-cni-571300 --memory=2200 --alsologtostderr   | newest-cni-571300            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.29.0-rc.2      |                              |                   |         |                     |                     |
	| stop    | -p old-k8s-version-321200                              | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-321200             | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-321200                              | old-k8s-version-321200       | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-755000 image list                          | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:49 UTC |
	| delete  | -p embed-certs-755000                                  | embed-certs-755000           | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	| start   | -p auto-968100 --memory=3072                           | auto-968100                  | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-336100                           | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-336100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-336100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-571300             | newest-cni-571300            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-571300                                   | newest-cni-571300            | minikube7\jenkins | v1.32.0 | 26 Feb 24 11:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:49:06
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:49:06.729447      32 out.go:291] Setting OutFile to fd 1836 ...
	I0226 11:49:06.730447      32 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:49:06.730447      32 out.go:304] Setting ErrFile to fd 2036...
	I0226 11:49:06.730447      32 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:49:06.760426      32 out.go:298] Setting JSON to false
	I0226 11:49:06.764431      32 start.go:129] hostinfo: {"hostname":"minikube7","uptime":5423,"bootTime":1708942723,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:49:06.765425      32 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:49:06.769425      32 out.go:177] * [auto-968100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:49:06.775435      32 notify.go:220] Checking for updates...
	I0226 11:49:06.779433      32 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:49:06.784433      32 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:49:06.791422      32 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:49:06.798459      32 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:49:06.803441      32 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:49:06.809424      32 config.go:182] Loaded profile config "default-k8s-diff-port-336100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:49:06.809424      32 config.go:182] Loaded profile config "newest-cni-571300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 11:49:06.810432      32 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:49:06.810432      32 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:49:07.188434      32 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:49:07.204455      32 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:49:07.682799      32 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:49:07.638279587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:49:07.686767      32 out.go:177] * Using the docker driver based on user configuration
	I0226 11:49:07.692780      32 start.go:299] selected driver: docker
	I0226 11:49:07.692780      32 start.go:903] validating driver "docker" against <nil>
	I0226 11:49:07.692780      32 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:49:07.787802      32 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:49:08.255774      32 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:49:08.197369964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:49:08.255774      32 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:49:08.256770      32 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:49:08.260767      32 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 11:49:08.264770      32 cni.go:84] Creating CNI manager for ""
	I0226 11:49:08.264770      32 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 11:49:08.264770      32 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0226 11:49:08.264770      32 start_flags.go:323] config:
	{Name:auto-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:49:08.274820      32 out.go:177] * Starting control plane node auto-968100 in cluster auto-968100
	I0226 11:49:08.278788      32 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:49:08.282776      32 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:49:08.288800      32 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:49:08.288800      32 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:49:08.288800      32 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 11:49:08.288800      32 cache.go:56] Caching tarball of preloaded images
	I0226 11:49:08.288800      32 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:49:08.288800      32 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0226 11:49:08.289771      32 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\config.json ...
	I0226 11:49:08.289771      32 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\config.json: {Name:mk9988f5dd5d27595a7fddc854aab47e6accdd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:08.512788      32 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:49:08.512788      32 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:49:08.512788      32 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:49:08.512788      32 start.go:365] acquiring machines lock for auto-968100: {Name:mkc42eef878ef2f28860fc95f93b3960dadab276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:49:08.512788      32 start.go:369] acquired machines lock for "auto-968100" in 0s
	I0226 11:49:08.512788      32 start.go:93] Provisioning new machine with config: &{Name:auto-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-968100 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:49:08.512788      32 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:49:09.993218    8212 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0226 11:49:09.993218    8212 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:49:09.994247    8212 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:49:09.994247    8212 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:49:09.994247    8212 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:49:09.994247    8212 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:49:10.000206    8212 out.go:204]   - Generating certificates and keys ...
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:49:10.001216    8212 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:49:10.002211    8212 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:49:10.002211    8212 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:49:10.002211    8212 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-571300] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-571300] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0226 11:49:10.003228    8212 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:49:10.004227    8212 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:49:10.004227    8212 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0226 11:49:10.004227    8212 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:49:10.005210    8212 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:49:10.005210    8212 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:49:10.005210    8212 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:49:10.005210    8212 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:49:10.009212    8212 out.go:204]   - Booting up control plane ...
	I0226 11:49:10.009212    8212 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:49:10.009212    8212 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:49:10.009212    8212 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:49:10.010231    8212 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:49:10.010231    8212 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:49:10.010231    8212 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 11:49:10.010231    8212 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:49:10.011226    8212 kubeadm.go:322] [apiclient] All control plane components are healthy after 14.507596 seconds
	I0226 11:49:10.011226    8212 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 11:49:10.011226    8212 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 11:49:10.012225    8212 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 11:49:10.012225    8212 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-571300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 11:49:10.012225    8212 kubeadm.go:322] [bootstrap-token] Using token: k6p57a.7mk8dmm9zmh6wzrn
	I0226 11:49:10.019260    8212 out.go:204]   - Configuring RBAC rules ...
	I0226 11:49:10.019260    8212 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 11:49:10.019260    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 11:49:10.020204    8212 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 11:49:10.021232    8212 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 11:49:10.021232    8212 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 11:49:10.021232    8212 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 11:49:10.021232    8212 kubeadm.go:322] 
	I0226 11:49:10.021232    8212 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 11:49:10.021232    8212 kubeadm.go:322] 
	I0226 11:49:10.021232    8212 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 11:49:10.022238    8212 kubeadm.go:322] 
	I0226 11:49:10.022238    8212 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 11:49:10.022238    8212 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 11:49:10.022238    8212 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 11:49:10.022238    8212 kubeadm.go:322] 
	I0226 11:49:10.022238    8212 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 11:49:10.022238    8212 kubeadm.go:322] 
	I0226 11:49:10.023232    8212 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 11:49:10.023232    8212 kubeadm.go:322] 
	I0226 11:49:10.023232    8212 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 11:49:10.023232    8212 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 11:49:10.023232    8212 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 11:49:10.023232    8212 kubeadm.go:322] 
	I0226 11:49:10.024216    8212 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 11:49:10.024216    8212 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 11:49:10.024216    8212 kubeadm.go:322] 
	I0226 11:49:10.024216    8212 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k6p57a.7mk8dmm9zmh6wzrn \
	I0226 11:49:10.025219    8212 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d \
	I0226 11:49:10.025219    8212 kubeadm.go:322] 	--control-plane 
	I0226 11:49:10.025219    8212 kubeadm.go:322] 
	I0226 11:49:10.025219    8212 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 11:49:10.025219    8212 kubeadm.go:322] 
	I0226 11:49:10.025219    8212 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k6p57a.7mk8dmm9zmh6wzrn \
	I0226 11:49:10.026308    8212 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d 
	I0226 11:49:10.026308    8212 cni.go:84] Creating CNI manager for ""
	I0226 11:49:10.026308    8212 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 11:49:10.032214    8212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0226 11:49:08.519791      32 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0226 11:49:08.519791      32 start.go:159] libmachine.API.Create for "auto-968100" (driver="docker")
	I0226 11:49:08.520773      32 client.go:168] LocalClient.Create starting
	I0226 11:49:08.521806      32 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 11:49:08.521806      32 main.go:141] libmachine: Decoding PEM data...
	I0226 11:49:08.521806      32 main.go:141] libmachine: Parsing certificate...
	I0226 11:49:08.521806      32 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 11:49:08.522781      32 main.go:141] libmachine: Decoding PEM data...
	I0226 11:49:08.522781      32 main.go:141] libmachine: Parsing certificate...
	I0226 11:49:08.542797      32 cli_runner.go:164] Run: docker network inspect auto-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:49:08.785771      32 cli_runner.go:211] docker network inspect auto-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:49:08.796777      32 network_create.go:281] running [docker network inspect auto-968100] to gather additional debugging logs...
	I0226 11:49:08.796777      32 cli_runner.go:164] Run: docker network inspect auto-968100
	W0226 11:49:09.024780      32 cli_runner.go:211] docker network inspect auto-968100 returned with exit code 1
	I0226 11:49:09.024780      32 network_create.go:284] error running [docker network inspect auto-968100]: docker network inspect auto-968100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-968100 not found
	I0226 11:49:09.024780      32 network_create.go:286] output of [docker network inspect auto-968100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-968100 not found
	
	** /stderr **
	I0226 11:49:09.038791      32 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:49:09.318810      32 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.350776      32 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.381773      32 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.428780      32 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.460807      32 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f31d0}
	I0226 11:49:09.460807      32 network_create.go:124] attempt to create docker network auto-968100 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0226 11:49:09.479825      32 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100
	W0226 11:49:09.738218      32 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100 returned with exit code 1
	W0226 11:49:09.738218      32 network_create.go:149] failed to create docker network auto-968100 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 11:49:09.738218      32 network_create.go:116] failed to create docker network auto-968100 192.168.85.0/24, will retry: subnet is taken
	I0226 11:49:09.778227      32 network.go:210] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:49:09.811330      32 network.go:207] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00240c960}
	I0226 11:49:09.811330      32 network_create.go:124] attempt to create docker network auto-968100 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0226 11:49:09.822201      32 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-968100 auto-968100
	I0226 11:49:10.199249      32 network_create.go:108] docker network auto-968100 192.168.94.0/24 created
	I0226 11:49:10.199249      32 kic.go:121] calculated static IP "192.168.94.2" for the "auto-968100" container
	I0226 11:49:10.228241      32 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:49:10.486254      32 cli_runner.go:164] Run: docker volume create auto-968100 --label name.minikube.sigs.k8s.io=auto-968100 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:49:10.736807      32 oci.go:103] Successfully created a docker volume auto-968100
	I0226 11:49:10.750820      32 cli_runner.go:164] Run: docker run --rm --name auto-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-968100 --entrypoint /usr/bin/test -v auto-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:49:10.054415    8212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0226 11:49:10.091218    8212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0226 11:49:10.268252    8212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 11:49:10.295248    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:10.296258    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=newest-cni-571300 minikube.k8s.io/updated_at=2024_02_26T11_49_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:10.302249    8212 ops.go:34] apiserver oom_adj: -16
	I0226 11:49:10.838810    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:13.959181   10808 docker.go:649] Took 15.826555 seconds to copy over tarball
	I0226 11:49:13.972530   10808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 11:49:14.109058    8212 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.2702258s)
	I0226 11:49:14.125471    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:14.826590    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:15.110237    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:15.619819    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:15.831172    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:16.347615    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:16.842398    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:17.344964    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:17.841388    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:18.882565   10808 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.9099043s)
	I0226 11:49:18.882626   10808 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 11:49:19.005355   10808 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 11:49:19.031851   10808 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 11:49:19.098086   10808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:49:19.253983   10808 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:49:18.331146    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:18.835955    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:19.331193    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:20.235513    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:23.656038    8212 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.4205016s)
	I0226 11:49:23.669918    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:24.048381    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:24.334676    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:24.844844    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:25.333880    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:25.839259    8212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:49:26.208761    8212 kubeadm.go:1088] duration metric: took 15.940399s to wait for elevateKubeSystemPrivileges.
	I0226 11:49:26.208761    8212 kubeadm.go:406] StartCluster complete in 40.2023789s
	I0226 11:49:26.208761    8212 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:26.208761    8212 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:49:26.210761    8212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:26.211765    8212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 11:49:26.211765    8212 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 11:49:26.212776    8212 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-571300"
	I0226 11:49:26.212776    8212 addons.go:69] Setting default-storageclass=true in profile "newest-cni-571300"
	I0226 11:49:26.212776    8212 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-571300"
	I0226 11:49:26.212776    8212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-571300"
	I0226 11:49:26.212776    8212 host.go:66] Checking if "newest-cni-571300" exists ...
	I0226 11:49:26.212776    8212 config.go:182] Loaded profile config "newest-cni-571300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 11:49:26.238786    8212 cli_runner.go:164] Run: docker container inspect newest-cni-571300 --format={{.State.Status}}
	I0226 11:49:26.247787    8212 cli_runner.go:164] Run: docker container inspect newest-cni-571300 --format={{.State.Status}}
	W0226 11:49:26.379776    8212 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "newest-cni-571300" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0226 11:49:26.379776    8212 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0226 11:49:26.380772    8212 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:49:26.383798    8212 out.go:177] * Verifying Kubernetes components...
	I0226 11:49:26.406808    8212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:49:26.491852    8212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:49:25.612020      32 cli_runner.go:217] Completed: docker run --rm --name auto-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-968100 --entrypoint /usr/bin/test -v auto-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (14.8610967s)
	I0226 11:49:25.612020      32 oci.go:107] Successfully prepared a docker volume auto-968100
	I0226 11:49:25.612567      32 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:49:25.612567      32 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:49:25.624511      32 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-968100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:49:26.240784   10808 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.9867532s)
	I0226 11:49:26.259769   10808 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:49:26.316774   10808 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
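The "wasn't preloaded" message that follows is the result of comparing an expected image list against the `docker images` output shown above. A minimal sketch of that set-difference check, using hypothetical image lists (the real comparison lives in minikube's `docker.go`/`cache_images.go`, not in shell):

```shell
# Expected images (registry.k8s.io) vs. what the preload actually contained
# (k8s.gcr.io) -- mirrors the mismatch visible in the log above.
expected='registry.k8s.io/kube-apiserver:v1.16.0
registry.k8s.io/pause:3.1'
preloaded='k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/pause:3.1'
missing=''
for img in $expected; do
  # -x: whole-line match, -F: fixed string (image names contain dots)
  printf '%s\n' "$preloaded" | grep -qxF "$img" || missing="$missing$img "
done
echo "not preloaded: $missing"
```

Because the preload tarball carries the old `k8s.gcr.io` names, every `registry.k8s.io` image counts as missing, which is why LoadImages then pulls each one individually.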
	I0226 11:49:26.316774   10808 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 11:49:26.316774   10808 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 11:49:26.334769   10808 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 11:49:26.334769   10808 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:49:26.341777   10808 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:26.345785   10808 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 11:49:26.346777   10808 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:26.350774   10808 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:26.352820   10808 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:26.355793   10808 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:26.356777   10808 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:49:26.356777   10808 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 11:49:26.370771   10808 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:26.375786   10808 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:26.375786   10808 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:26.375786   10808 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 11:49:26.384789   10808 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:26.391786   10808 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	W0226 11:49:26.477938   10808 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:49:26.586859   10808 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:49:26.699860   10808 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:26.780877   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0226 11:49:26.823540   10808 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0226 11:49:26.931718   10808 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.016651   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	W0226 11:49:27.040340   10808 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.040340   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 11:49:27.071330   10808 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 11:49:27.071330   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
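The "windows sanitize" lines map image references onto Windows-safe cache paths: a tag separator `:` is not a legal character in a Windows file name, so the last `:` becomes `_`. A one-line approximation of that transform (the real logic is in `localpath.go`; this sed expression is illustrative only):

```shell
# Replace the final ':' (the tag separator) with '_' so the name is a
# valid Windows file name, as in kube-scheduler:v1.16.0 -> kube-scheduler_v1.16.0.
sanitize() { printf '%s\n' "$1" | sed 's/:\([^:]*\)$/_\1/'; }
out=$(sanitize 'registry.k8s.io\kube-scheduler:v1.16.0')
echo "$out"
```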
	I0226 11:49:27.071330   10808 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:27.082335   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:27.087335   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 11:49:27.088346   10808 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 11:49:27.088346   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:49:27.088346   10808 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 11:49:27.102349   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 11:49:27.137366   10808 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 11:49:27.137366   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:49:27.137366   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0226 11:49:27.137366   10808 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 11:49:27.152331   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	W0226 11:49:27.168361   10808 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.171352   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0226 11:49:27.183437   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:26.495866    8212 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:49:26.495866    8212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 11:49:26.498876    8212 addons.go:234] Setting addon default-storageclass=true in "newest-cni-571300"
	I0226 11:49:26.498876    8212 host.go:66] Checking if "newest-cni-571300" exists ...
	I0226 11:49:26.510857    8212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-571300
	I0226 11:49:26.525848    8212 cli_runner.go:164] Run: docker container inspect newest-cni-571300 --format={{.State.Status}}
	I0226 11:49:26.566921    8212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0226 11:49:26.584869    8212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-571300
	I0226 11:49:26.732847    8212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54445 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-571300\id_rsa Username:docker}
	I0226 11:49:26.747866    8212 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 11:49:26.747866    8212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 11:49:26.759849    8212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-571300
	I0226 11:49:26.802881    8212 api_server.go:52] waiting for apiserver process to appear ...
	I0226 11:49:26.815854    8212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:49:26.967725    8212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54445 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-571300\id_rsa Username:docker}
	I0226 11:49:27.291347    8212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:49:27.303343    8212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 11:49:27.770396    8212 api_server.go:72] duration metric: took 1.3896136s to wait for apiserver process to appear ...
	I0226 11:49:27.770396    8212 api_server.go:88] waiting for apiserver healthz status ...
	I0226 11:49:27.770396    8212 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54444/healthz ...
	I0226 11:49:27.770396    8212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.2034664s)
	I0226 11:49:27.771083    8212 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
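The host-record injection reported here is the long `sed` pipeline visible earlier in the log: it inserts a `hosts { ... }` stanza into the Corefile before the `forward` plugin line, then pipes the result back through `kubectl replace`. A standalone sketch of just the sed step, on a simplified Corefile (this relies on GNU sed's one-line `i \` text extension with embedded `\n`, the same assumption the real command makes inside the Linux node):

```shell
# Simplified Corefile; the real input comes from the coredns ConfigMap.
corefile='    errors
    forward . /etc/resolv.conf
    cache 30'
# Insert a hosts{} block before the forward directive so
# host.minikube.internal resolves inside the cluster.
patched=$(printf '%s\n' "$corefile" | sed -e '/forward . \/etc\/resolv\.conf/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }')
printf '%s\n' "$patched"
```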
	I0226 11:49:27.791230    8212 api_server.go:279] https://127.0.0.1:54444/healthz returned 200:
	ok
	I0226 11:49:27.800792    8212 api_server.go:141] control plane version: v1.29.0-rc.2
	I0226 11:49:27.800879    8212 api_server.go:131] duration metric: took 30.4828ms to wait for apiserver health ...
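The healthz wait above is a retry loop against the apiserver's `/healthz` endpoint until it returns 200. A stand-in sketch of the loop shape, where `check_healthz` is a hypothetical stub for the real HTTPS probe against `https://127.0.0.1:54444/healthz`:

```shell
# Stub: pretend the API server becomes healthy on the third probe.
attempts=0
check_healthz() { [ "$attempts" -ge 3 ]; }
until check_healthz; do
  attempts=$((attempts + 1))
done
status="ok after $attempts attempts"
echo "$status"
```

The real loop also enforces the 6m0s node-wait deadline set earlier and sleeps between probes; both are omitted here for brevity.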
	I0226 11:49:27.800902    8212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 11:49:27.884661    8212 system_pods.go:59] 7 kube-system pods found
	I0226 11:49:27.885451    8212 system_pods.go:61] "coredns-76f75df574-lqvqv" [424febef-c656-49c2-ac5a-ac255a127b16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0226 11:49:27.885523    8212 system_pods.go:61] "coredns-76f75df574-zhmpn" [15aedae3-e333-47f6-808c-bb92f23654cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0226 11:49:27.885523    8212 system_pods.go:61] "etcd-newest-cni-571300" [b26b1f0d-fce7-4628-b1dd-899ecc8b6763] Running
	I0226 11:49:27.885523    8212 system_pods.go:61] "kube-apiserver-newest-cni-571300" [438304e0-b75d-492a-a2e0-b0e7f90e8840] Running
	I0226 11:49:27.885523    8212 system_pods.go:61] "kube-controller-manager-newest-cni-571300" [8a35589e-787b-42a6-b14b-4574908c3785] Running
	I0226 11:49:27.885523    8212 system_pods.go:61] "kube-proxy-jmcln" [89ac662b-8ecf-4e32-8f2e-357b4e3e2852] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0226 11:49:27.885601    8212 system_pods.go:61] "kube-scheduler-newest-cni-571300" [f8f6cdef-522c-4130-9e7c-4e7ce73c0cf7] Running
	I0226 11:49:27.885601    8212 system_pods.go:74] duration metric: took 84.698ms to wait for pod list to return data ...
	I0226 11:49:27.885601    8212 default_sa.go:34] waiting for default service account to be created ...
	I0226 11:49:27.894572    8212 default_sa.go:45] found service account: "default"
	I0226 11:49:27.894572    8212 default_sa.go:55] duration metric: took 8.9714ms for default service account to be created ...
	I0226 11:49:27.894661    8212 kubeadm.go:581] duration metric: took 1.5138782s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0226 11:49:27.894699    8212 node_conditions.go:102] verifying NodePressure condition ...
	I0226 11:49:27.973974    8212 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0226 11:49:27.974045    8212 node_conditions.go:123] node cpu capacity is 16
	I0226 11:49:27.974128    8212 node_conditions.go:105] duration metric: took 79.4284ms to run NodePressure ...
	I0226 11:49:27.974128    8212 start.go:228] waiting for startup goroutines ...
	I0226 11:49:30.077505    8212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.7861384s)
	I0226 11:49:30.077505    8212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.7741421s)
	I0226 11:49:30.342181    8212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 11:49:30.346026    8212 addons.go:505] enable addons completed in 4.1342327s: enabled=[storage-provisioner default-storageclass]
	I0226 11:49:30.346026    8212 start.go:233] waiting for cluster config update ...
	I0226 11:49:30.346026    8212 start.go:242] writing updated cluster config ...
	I0226 11:49:30.362020    8212 ssh_runner.go:195] Run: rm -f paused
	I0226 11:49:30.530200    8212 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0226 11:49:30.534615    8212 out.go:177] * Done! kubectl is now configured to use "newest-cni-571300" cluster and "default" namespace by default
	I0226 11:49:27.207331   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0226 11:49:27.227343   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0226 11:49:27.229344   10808 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 11:49:27.229344   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:49:27.229344   10808 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 11:49:27.238351   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	W0226 11:49:27.277383   10808 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0226 11:49:27.279358   10808 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 11:49:27.279358   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:49:27.279358   10808 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 11:49:27.287365   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0226 11:49:27.297358   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0226 11:49:27.337663   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0226 11:49:27.387729   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:27.439695   10808 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 11:49:27.439695   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:49:27.439695   10808 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:27.451656   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 11:49:27.497822   10808 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:27.509824   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0226 11:49:27.547095   10808 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 11:49:27.547191   10808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:49:27.547191   10808 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:27.559142   10808 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 11:49:27.605204   10808 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0226 11:49:27.605744   10808 cache_images.go:92] LoadImages completed in 1.2889611s
	W0226 11:49:27.605904   10808 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0: The system cannot find the file specified.
	I0226 11:49:27.617726   10808 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:49:27.740360   10808 cni.go:84] Creating CNI manager for ""
	I0226 11:49:27.740545   10808 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 11:49:27.740545   10808 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:49:27.740545   10808 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-321200 NodeName:old-k8s-version-321200 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 11:49:27.741026   10808 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-321200"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-321200
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:49:27.741182   10808 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-321200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:49:27.756833   10808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 11:49:27.788093   10808 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:49:27.810138   10808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:49:27.827591   10808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0226 11:49:27.878736   10808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:49:27.923417   10808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0226 11:49:27.983829   10808 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:49:28.001749   10808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
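The `/etc/hosts` command just above is an idempotent record update: strip any stale `control-plane.minikube.internal` line, append the current one, and copy the result back. A self-contained sketch against a temp file (the real command runs under `sudo` on the node and matches on a leading tab):

```shell
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.85.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any stale record, then append the current control-plane IP --
# rerunning this never produces duplicate entries.
{ grep -v 'control-plane.minikube.internal' "$HOSTS"; printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
entries=$(grep -c 'control-plane.minikube.internal' "$HOSTS")
ip=$(grep 'control-plane.minikube.internal' "$HOSTS" | cut -f1)
echo "$entries entry: $ip"
rm -f "$HOSTS"
```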
	I0226 11:49:28.027125   10808 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200 for IP: 192.168.85.2
	I0226 11:49:28.027125   10808 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:28.028286   10808 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:49:28.028583   10808 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 11:49:28.029411   10808 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\client.key
	I0226 11:49:28.029568   10808 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key.43b9df8c
	I0226 11:49:28.030200   10808 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key
	I0226 11:49:28.032221   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:49:28.032561   10808 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:49:28.032716   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:49:28.033076   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:49:28.033403   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:49:28.033778   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:49:28.034119   10808 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:49:28.035509   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:49:28.096154   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 11:49:28.138515   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:49:28.193749   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-321200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 11:49:28.235981   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:49:28.281733   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:49:28.334907   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:49:28.383693   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:49:28.434967   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:49:28.481334   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:49:28.531716   10808 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:49:28.579405   10808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:49:28.629660   10808 ssh_runner.go:195] Run: openssl version
	I0226 11:49:28.657958   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:49:28.700636   10808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:49:28.712851   10808 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:49:28.730793   10808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:49:28.761261   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:49:28.803910   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:49:28.835047   10808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:49:28.849201   10808 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:49:28.868679   10808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:49:28.912878   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:49:28.945100   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:49:28.986231   10808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:49:28.999235   10808 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:49:29.011225   10808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:49:29.042612   10808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:49:29.085409   10808 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:49:29.111788   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 11:49:29.139514   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 11:49:29.186580   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 11:49:29.214568   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 11:49:29.247784   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 11:49:29.283273   10808 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0226 11:49:29.302458   10808 kubeadm.go:404] StartCluster: {Name:old-k8s-version-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-321200 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:49:29.314447   10808 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:49:29.369711   10808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:49:29.394689   10808 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 11:49:29.394689   10808 kubeadm.go:636] restartCluster start
	I0226 11:49:29.406665   10808 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 11:49:29.436804   10808 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:29.447458   10808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-321200
	I0226 11:49:29.613505   10808 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-321200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:49:29.614511   10808 kubeconfig.go:146] "old-k8s-version-321200" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0226 11:49:29.615519   10808 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:49:29.643510   10808 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 11:49:29.662359   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:29.684123   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:29.707405   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:30.175052   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:30.189559   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:30.211752   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:30.676011   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:30.697044   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:30.723029   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:31.177297   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:31.195270   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:31.214271   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:31.663170   10808 api_server.go:166] Checking apiserver status ...
	I0226 11:49:31.682175   10808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 11:49:31.717207   10808 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 11:49:32.168316   10808 api_server.go:166] Checking apiserver status ...
	
	
	==> Docker <==
	Feb 26 11:48:32 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:32.797698274Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Feb 26 11:48:32 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:32.859795883Z" level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Feb 26 11:48:33 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45dc9bcb6abb9a7631143a8207ba9aaf2a67676ab13fa9bc6d7d963bda776b58/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 26 11:48:33 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9bac999fd7d6b3d7cce54b3a8f45722a964880064fb7d428118c71bd57f04be/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 26 11:48:34 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:34.099987675Z" level=info msg="ignoring event" container=0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:48:34 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:34.168056621Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=cbf5992a9096f203 traceID=ed61dcc0c019266d096562dadbad75b2
	Feb 26 11:48:34 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:34.581566127Z" level=info msg="ignoring event" container=c31578b82eeb3bfb4a9d89f7618baefb552a2a3931a9144982093d1b10bc5fac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:48:44 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:44Z" level=info msg="Pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: ee3247c7e545: Extracting [======================>                            ]  34.54MB/75.78MB"
	Feb 26 11:48:54 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:48:54Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Feb 26 11:48:54 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:54.382708386Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=a6a2c44b9ea81091 traceID=16079db72e4ac8c8935161d6ef04f995
	Feb 26 11:48:55 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:55.268174387Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Feb 26 11:48:55 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:48:55.268450990Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest version 2, schema 1 support will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format, or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Feb 26 11:49:05 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:49:05Z" level=info msg="Pulling image registry.k8s.io/echoserver:1.4: 412c0feed608: Extracting [>                                                  ]  294.9kB/27.02MB"
	Feb 26 11:49:12 default-k8s-diff-port-336100 dockerd[983]: time="2024-02-26T11:49:12.444392157Z" level=error msg="Handler for POST /v1.44/containers/36f9099cc403/pause returned error: cannot pause container 36f9099cc403060dbf46272d2479ef74b611d60766f7a0da5b42e3ad41e0153d: OCI runtime pause failed: unable to freeze: unknown"
	Feb 26 11:49:15 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:49:15Z" level=info msg="Pulling image registry.k8s.io/echoserver:1.4: d3c51dabc842: Extracting [==================================================>]     172B/172B"
	Feb 26 11:49:18 default-k8s-diff-port-336100 cri-dockerd[1222]: time="2024-02-26T11:49:18Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: Status: Downloaded newer image for registry.k8s.io/echoserver:1.4"
	Feb 26 11:49:18 default-k8s-diff-port-336100 cri-dockerd[1222]: W0226 11:49:18.573716    1222 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:49:20 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:30 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:30 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:31 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:31 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:31 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:31 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Feb 26 11:49:31 default-k8s-diff-port-336100 dockerd[983]: 2024/02/26 11:49:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	025f65d455b3f       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   42 seconds ago       Running             kubernetes-dashboard      0                   45dc9bcb6abb9       kubernetes-dashboard-8694d4445c-ws57n
	9b58ef900d133       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner       0                   a350137fd8cc6       storage-provisioner
	128ab6b6902de       ead0a4a53df89                                                                                    About a minute ago   Running             coredns                   0                   fb6ee37155082       coredns-5dd5756b68-6dzcq
	e3796456c31fc       83f6cc407eed8                                                                                    About a minute ago   Running             kube-proxy                0                   10f60aa72eef7       kube-proxy-wn86k
	f88a17167e777       d058aa5ab969c                                                                                    About a minute ago   Running             kube-controller-manager   0                   40c261516b8ac       kube-controller-manager-default-k8s-diff-port-336100
	36f9099cc4030       73deb9a3f7025                                                                                    About a minute ago   Running             etcd                      0                   4bdbf2ecbaa92       etcd-default-k8s-diff-port-336100
	f30be25967718       e3db313c6dbc0                                                                                    About a minute ago   Running             kube-scheduler            0                   85ea568887df4       kube-scheduler-default-k8s-diff-port-336100
	71f8ddd2d1ceb       7fe0e6f37db33                                                                                    About a minute ago   Running             kube-apiserver            0                   1f13081873a70       kube-apiserver-default-k8s-diff-port-336100
	
	
	==> coredns [128ab6b6902d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39021 - 46909 "HINFO IN 9218697660716973466.1102664807043899096. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.179612512s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[Feb26 11:48] hrtimer: interrupt took 2869333 ns
	
	
	==> etcd [36f9099cc403] <==
	{"level":"warn","ts":"2024-02-26T11:48:47.272509Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-26T11:48:46.591718Z","time spent":"680.786028ms","remote":"127.0.0.1:35494","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-02-26T11:48:47.272704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.201082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-jvkjt.17b767410b462ca7\" ","response":"range_response_count:1 size:839"}
	{"level":"info","ts":"2024-02-26T11:48:47.272837Z","caller":"traceutil/trace.go:171","msg":"trace[1960303670] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-jvkjt.17b767410b462ca7; range_end:; response_count:1; response_revision:557; }","duration":"179.330485ms","start":"2024-02-26T11:48:47.093426Z","end":"2024-02-26T11:48:47.272757Z","steps":["trace[1960303670] 'agreement among raft nodes before linearized reading'  (duration: 179.149281ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.533626Z","caller":"traceutil/trace.go:171","msg":"trace[1772795827] linearizableReadLoop","detail":"{readStateIndex:577; appliedIndex:576; }","duration":"259.114157ms","start":"2024-02-26T11:48:47.274484Z","end":"2024-02-26T11:48:47.533598Z","steps":["trace[1772795827] 'read index received'  (duration: 255.889286ms)","trace[1772795827] 'applied index is now lower than readState.Index'  (duration: 3.223771ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:48:47.533825Z","caller":"traceutil/trace.go:171","msg":"trace[1484794696] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"259.632269ms","start":"2024-02-26T11:48:47.274178Z","end":"2024-02-26T11:48:47.533811Z","steps":["trace[1484794696] 'process raft request'  (duration: 256.180992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:47.534001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.679671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-26T11:48:47.534058Z","caller":"traceutil/trace.go:171","msg":"trace[736565086] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:558; }","duration":"259.751572ms","start":"2024-02-26T11:48:47.274291Z","end":"2024-02-26T11:48:47.534043Z","steps":["trace[736565086] 'agreement among raft nodes before linearized reading'  (duration: 259.557668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:47.568252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.261817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"warn","ts":"2024-02-26T11:48:47.568293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.03849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:7558"}
	{"level":"info","ts":"2024-02-26T11:48:47.568319Z","caller":"traceutil/trace.go:171","msg":"trace[1078504284] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:559; }","duration":"203.335719ms","start":"2024-02-26T11:48:47.364967Z","end":"2024-02-26T11:48:47.568303Z","steps":["trace[1078504284] 'agreement among raft nodes before linearized reading'  (duration: 203.105713ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.56835Z","caller":"traceutil/trace.go:171","msg":"trace[888186011] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:559; }","duration":"193.100991ms","start":"2024-02-26T11:48:47.375234Z","end":"2024-02-26T11:48:47.568335Z","steps":["trace[888186011] 'agreement among raft nodes before linearized reading'  (duration: 192.927487ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.568315Z","caller":"traceutil/trace.go:171","msg":"trace[1118518390] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"292.525ms","start":"2024-02-26T11:48:47.275771Z","end":"2024-02-26T11:48:47.568296Z","steps":["trace[1118518390] 'process raft request'  (duration: 292.028689ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:48:47.708192Z","caller":"traceutil/trace.go:171","msg":"trace[1122958620] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"130.748806ms","start":"2024-02-26T11:48:47.577342Z","end":"2024-02-26T11:48:47.708091Z","steps":["trace[1122958620] 'process raft request'  (duration: 61.485567ms)","trace[1122958620] 'compare'  (duration: 68.85073ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:48:53.921958Z","caller":"traceutil/trace.go:171","msg":"trace[351637160] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"145.11184ms","start":"2024-02-26T11:48:53.776818Z","end":"2024-02-26T11:48:53.92193Z","steps":["trace[351637160] 'process raft request'  (duration: 144.926438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:54.893724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"525.333778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:7558"}
	{"level":"info","ts":"2024-02-26T11:48:54.893961Z","caller":"traceutil/trace.go:171","msg":"trace[866541400] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:566; }","duration":"525.577681ms","start":"2024-02-26T11:48:54.368361Z","end":"2024-02-26T11:48:54.893939Z","steps":["trace[866541400] 'range keys from in-memory index tree'  (duration: 525.135875ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:48:54.894184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-26T11:48:54.368339Z","time spent":"525.724082ms","remote":"127.0.0.1:35666","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":2,"response size":7581,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	WARNING: 2024/02/26 11:49:11 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-02-26T11:49:12.565552Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873776142822305911,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-02-26T11:49:12.99027Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.454352514s","expected-duration":"1s"}
	{"level":"warn","ts":"2024-02-26T11:49:13.656769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.412586ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873776142822305913 > lease_revoke:<id:40898de53e367025>","response":"size:28"}
	{"level":"info","ts":"2024-02-26T11:49:13.656978Z","caller":"traceutil/trace.go:171","msg":"trace[406868893] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:610; }","duration":"1.591706658s","start":"2024-02-26T11:49:12.065259Z","end":"2024-02-26T11:49:13.656965Z","steps":["trace[406868893] 'read index received'  (duration: 925.987974ms)","trace[406868893] 'applied index is now lower than readState.Index'  (duration: 665.716784ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-26T11:49:13.657043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.59180626s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-26T11:49:13.657064Z","caller":"traceutil/trace.go:171","msg":"trace[949421801] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:588; }","duration":"1.591836961s","start":"2024-02-26T11:49:12.065219Z","end":"2024-02-26T11:49:13.657056Z","steps":["trace[949421801] 'agreement among raft nodes before linearized reading'  (duration: 1.59178346s)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:49:13.657085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-26T11:49:12.065204Z","time spent":"1.591874861s","remote":"127.0.0.1:35606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":28,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	
	
	==> kernel <==
	 11:49:47 up  1:30,  0 users,  load average: 9.61, 6.54, 5.04
	Linux default-k8s-diff-port-336100 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [71f8ddd2d1ce] <==
	E0226 11:48:30.567421       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0226 11:48:30.599779       1 handler_proxy.go:93] no RequestInfo found in the context
	E0226 11:48:30.599927       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0226 11:48:30.599944       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0226 11:48:30.600112       1 handler_proxy.go:93] no RequestInfo found in the context
	E0226 11:48:30.600405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0226 11:48:30.601143       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0226 11:48:32.905152       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.208.152"}
	I0226 11:48:33.106266       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.227.124"}
	I0226 11:48:35.520402       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0226 11:48:35.521776       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0226 11:48:47.535469       1 trace.go:236] Trace[1370258729]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.103.2,type:*v1.Endpoints,resource:apiServerIPInfo (26-Feb-2024 11:48:46.589) (total time: 946ms):
	Trace[1370258729]: ---"Transaction prepared" 682ms (11:48:47.273)
	Trace[1370258729]: ---"Txn call completed" 261ms (11:48:47.535)
	Trace[1370258729]: [946.150625ms] [946.150625ms] END
	I0226 11:48:54.896780       1 trace.go:236] Trace[1367126895]: "List" accept:application/json, */*,audit-id:31084a52-ce21-4c65-9f75-6779e9d521ec,client:192.168.103.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (26-Feb-2024 11:48:54.367) (total time: 529ms):
	Trace[1367126895]: ["List(recursive=true) etcd3" audit-id:31084a52-ce21-4c65-9f75-6779e9d521ec,key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: 529ms (11:48:54.367)]
	Trace[1367126895]: [529.584523ms] [529.584523ms] END
	I0226 11:49:00.887237       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0226 11:49:11.785616       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 109.802µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0226 11:49:11.785682       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0226 11:49:11.787224       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0226 11:49:11.787390       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0226 11:49:11.788706       1 timeout.go:142] post-timeout activity - time-elapsed: 3.131564ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-diff-port-336100" result: <nil>
	
	
	==> kube-controller-manager [f88a17167e77] <==
	E0226 11:48:32.000588       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0226 11:48:32.000596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.337655ms"
	E0226 11:48:32.000836       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" failed with pods "dashboard-metrics-scraper-5f989dc9cf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0226 11:48:32.001383       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0226 11:48:32.001467       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f989dc9cf-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0226 11:48:32.190350       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-ws57n"
	I0226 11:48:32.206597       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7z4nr"
	I0226 11:48:32.268235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="185.94758ms"
	I0226 11:48:32.268235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.404974ms"
	I0226 11:48:32.484297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="215.845298ms"
	I0226 11:48:32.575623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="306.837208ms"
	I0226 11:48:32.768970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="284.419393ms"
	I0226 11:48:32.769118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="88.904µs"
	I0226 11:48:32.770676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="194.890044ms"
	I0226 11:48:32.770902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.203µs"
	I0226 11:48:33.995091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="81.702µs"
	I0226 11:48:34.789541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.301µs"
	I0226 11:48:35.068532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.401µs"
	I0226 11:48:35.191474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.401µs"
	I0226 11:48:35.227037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.102µs"
	I0226 11:48:35.321738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="70.1µs"
	E0226 11:48:51.689475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0226 11:48:52.093768       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0226 11:48:56.632312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="33.351454ms"
	I0226 11:48:56.632591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.901µs"
	
	
	==> kube-proxy [e3796456c31f] <==
	I0226 11:48:26.904784       1 server_others.go:69] "Using iptables proxy"
	I0226 11:48:26.972827       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0226 11:48:27.092371       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 11:48:27.097953       1 server_others.go:152] "Using iptables Proxier"
	I0226 11:48:27.098009       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 11:48:27.098033       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 11:48:27.098078       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 11:48:27.098885       1 server.go:846] "Version info" version="v1.28.4"
	I0226 11:48:27.098901       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:48:27.101265       1 config.go:188] "Starting service config controller"
	I0226 11:48:27.101311       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 11:48:27.101349       1 config.go:97] "Starting endpoint slice config controller"
	I0226 11:48:27.101365       1 config.go:315] "Starting node config controller"
	I0226 11:48:27.101376       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 11:48:27.101376       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 11:48:27.266955       1 shared_informer.go:318] Caches are synced for node config
	I0226 11:48:27.267010       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0226 11:48:27.267019       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [f30be2596771] <==
	W0226 11:48:02.475804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0226 11:48:02.475952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0226 11:48:02.538826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 11:48:02.538929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0226 11:48:02.542074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 11:48:02.542191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0226 11:48:02.615027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0226 11:48:02.615080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0226 11:48:02.634637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0226 11:48:02.634760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0226 11:48:02.658717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 11:48:02.658825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0226 11:48:02.720005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0226 11:48:02.720140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0226 11:48:02.795056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 11:48:02.795106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0226 11:48:02.813397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 11:48:02.813519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0226 11:48:02.869045       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0226 11:48:02.869244       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0226 11:48:02.869978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0226 11:48:02.870010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0226 11:48:05.771847       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0226 11:48:05.771898       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0226 11:48:11.079466       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:32.373744    9261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed641ac2dd84f690d35417e0779a14fc89bb7d4440bd9f5fe5cfaf01da951880"
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.867825    9261 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.868255    9261 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.869647    9261 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gpddq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-jvkjt_kube-system(1b16d8a8-d350-4790-a653-2eb996a85ab1): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Feb 26 11:48:32 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:32.869815    9261 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-jvkjt" podUID="1b16d8a8-d350-4790-a653-2eb996a85ab1"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:33.869226    9261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bac999fd7d6b3d7cce54b3a8f45722a964880064fb7d428118c71bd57f04be"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:33.903770    9261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45dc9bcb6abb9a7631143a8207ba9aaf2a67676ab13fa9bc6d7d963bda776b58"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:33.967570    9261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.967502671 podCreationTimestamp="2024-02-26 11:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-26 11:48:33.967364368 +0000 UTC m=+24.383499206" watchObservedRunningTime="2024-02-26 11:48:33.967502671 +0000 UTC m=+24.383637409"
	Feb 26 11:48:33 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:33.975424    9261 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jvkjt" podUID="1b16d8a8-d350-4790-a653-2eb996a85ab1"
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.796682    9261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b484w\" (UniqueName: \"kubernetes.io/projected/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-kube-api-access-b484w\") pod \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\" (UID: \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\") "
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.796897    9261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-config-volume\") pod \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\" (UID: \"b9fab7ec-fcd3-454c-b3aa-a0cef2112f52\") "
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.798008    9261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52" (UID: "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.802234    9261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-kube-api-access-b484w" (OuterVolumeSpecName: "kube-api-access-b484w") pod "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52" (UID: "b9fab7ec-fcd3-454c-b3aa-a0cef2112f52"). InnerVolumeSpecName "kube-api-access-b484w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.897435    9261 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-config-volume\") on node \"default-k8s-diff-port-336100\" DevicePath \"\""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.897594    9261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-b484w\" (UniqueName: \"kubernetes.io/projected/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52-kube-api-access-b484w\") on node \"default-k8s-diff-port-336100\" DevicePath \"\""
	Feb 26 11:48:34 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:34.974982    9261 scope.go:117] "RemoveContainer" containerID="0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:35.068867    9261 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jvkjt" podUID="1b16d8a8-d350-4790-a653-2eb996a85ab1"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:35.178776    9261 scope.go:117] "RemoveContainer" containerID="0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: E0226 11:48:35.184059    9261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed" containerID="0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:35 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:35.184236    9261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"} err="failed to get container status \"0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0a114dc66150579971de16baf519857a34d0c0030fd86856a6890edd0ac333ed"
	Feb 26 11:48:36 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:48:36.111601    9261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b9fab7ec-fcd3-454c-b3aa-a0cef2112f52" path="/var/lib/kubelet/pods/b9fab7ec-fcd3-454c-b3aa-a0cef2112f52/volumes"
	Feb 26 11:49:11 default-k8s-diff-port-336100 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Feb 26 11:49:11 default-k8s-diff-port-336100 kubelet[9261]: I0226 11:49:11.677442    9261 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Feb 26 11:49:11 default-k8s-diff-port-336100 systemd[1]: kubelet.service: Deactivated successfully.
	Feb 26 11:49:11 default-k8s-diff-port-336100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [025f65d455b3] <==
	2024/02/26 11:48:55 Starting overwatch
	2024/02/26 11:48:55 Using namespace: kubernetes-dashboard
	2024/02/26 11:48:55 Using in-cluster config to connect to apiserver
	2024/02/26 11:48:55 Using secret token for csrf signing
	2024/02/26 11:48:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/02/26 11:48:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/02/26 11:48:55 Successful initial request to the apiserver, version: v1.28.4
	2024/02/26 11:48:55 Generating JWE encryption key
	2024/02/26 11:48:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/02/26 11:48:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/02/26 11:48:55 Initializing JWE encryption key from synchronized object
	2024/02/26 11:48:55 Creating in-cluster Sidecar client
	2024/02/26 11:48:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/02/26 11:48:55 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [9b58ef900d13] <==
	I0226 11:48:32.908707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0226 11:48:32.984237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0226 11:48:32.984529       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0226 11:48:33.079549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0226 11:48:33.080165       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336100_a694480a-94d6-4240-bee3-2650b9fe7c16!
	I0226 11:48:33.080528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc07d376-077f-4e10-ab25-6a4962b38d1c", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-336100_a694480a-94d6-4240-bee3-2650b9fe7c16 became leader
	I0226 11:48:33.181003       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336100_a694480a-94d6-4240-bee3-2650b9fe7c16!
	

-- /stdout --
** stderr ** 
	W0226 11:49:34.855309   13960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100: exit status 2 (1.515263s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0226 11:49:48.021322    7576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "default-k8s-diff-port-336100" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (39.39s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (324.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0226 12:02:13.459665   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
E0226 12:02:17.445323   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:02:29.351922   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:29.366898   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:29.382125   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:29.413553   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:02:29.460154   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:29.555367   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:29.730221   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:30.064262   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:30.716246   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:32.006205   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:34.579438   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:38.943447   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:02:39.706756   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.490110   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.505342   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.520536   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.552356   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.599723   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.693254   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:41.868113   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:42.201534   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:42.856173   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:44.143206   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:02:46.707842   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:02:49.951138   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:02:51.830783   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:03:02.084152   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:03:10.436297   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:03:13.813854   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:13.829379   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:13.844542   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:13.875846   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:13.922515   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:14.017048   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:14.191325   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:14.512469   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:15.162581   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:16.453253   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:19.027453   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:03:22.569007   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:03:24.161483   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:03:34.415566   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:03:51.398127   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
E0226 12:03:54.910708   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:03:55.867727   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:04:03.531415   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:04:12.776706   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-968100\client.crt: The system cannot find the path specified.
E0226 12:04:16.636400   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:04:35.879668   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
E0226 12:04:40.581866   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:04:42.189048   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 12:04:44.453803   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:04:54.967507   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:05:13.326798   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:05:22.784967   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-968100\client.crt: The system cannot find the path specified.
E0226 12:05:25.464003   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.519816   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.534777   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.551037   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.582296   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.629799   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.723640   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:27.883991   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:28.216262   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:28.868981   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:30.159136   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:05:32.725435   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
E0226 12:05:37.857138   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:05:48.109245   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:05:53.187141   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 12:05:57.811161   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:06:05.362325   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 12:06:08.598632   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:06:45.623188   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 12:06:49.478897   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 12:06:49.572883   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 12:07:13.460330   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54519/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-321200 -n old-k8s-version-321200
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 2 (1.228729s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 12:07:28.225321    5096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-321200" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-321200
E0226 12:07:29.351293   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-968100\client.crt: The system cannot find the path specified.
helpers_test.go:235: (dbg) docker inspect old-k8s-version-321200:

-- stdout --
	[
	    {
	        "Id": "9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242",
	        "Created": "2024-02-26T11:37:54.399460536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288204,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:48:44.943739959Z",
	            "FinishedAt": "2024-02-26T11:48:39.811543318Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/hosts",
	        "LogPath": "/var/lib/docker/containers/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242/9e96dc767099f65e5871d1cb57b92022dfc4f7638c23fd9cd5cfc972fc621242-json.log",
	        "Name": "/old-k8s-version-321200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-321200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-321200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11-init/diff:/var/lib/docker/overlay2/a786c9685ff855515e3587508a6f2e6d7ddb83f4357560222dd23bc73e4b5ed1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babf521722ae935ae85b94b9ef4c7966cf904617ca0e17bde085a8e31fdedd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-321200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-321200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-321200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-321200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7222b3f3e5ce4dac0da549da99a03d584793c96ffcfcede00bdd92b38fae1e9",
	            "SandboxKey": "/var/run/docker/netns/d7222b3f3e5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54515"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54517"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54518"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54519"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-321200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e96dc767099",
	                        "old-k8s-version-321200"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3d8e32e292076657fa3147b08ea4473653a270d339de0a1d187a6074718ce682",
	                    "EndpointID": "c20f02b95d19bda13eac592cade6848b5588ff57ea6c759d59d9a04f07452b51",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-321200",
	                        "9e96dc767099"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 2 (1.1926989s)
-- stdout --
	Running
-- /stdout --
** stderr ** 
	W0226 12:07:29.630490    3188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-321200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-321200 logs -n 25: (1.7426458s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | iptables -t nat -L -n -v                             |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status kubelet --all                       |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat kubelet                                |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | journalctl -xeu kubelet --all                        |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status docker --all                        |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat docker                                 |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/docker/daemon.json                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo docker                        | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | system info                                          |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status cri-docker                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat cri-docker                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | cri-dockerd --version                                |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl status containerd                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat containerd                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /lib/systemd/system/containerd.service               |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo cat                           | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/containerd/config.toml                          |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | containerd config dump                               |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC |                     |
	|         | systemctl status crio --all                          |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo                               | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | systemctl cat crio --no-pager                        |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo find                          | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p kubenet-968100 sudo crio                          | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|         | config                                               |                |                   |         |                     |                     |
	| delete  | -p kubenet-968100                                    | kubenet-968100 | minikube7\jenkins | v1.32.0 | 26 Feb 24 12:01 UTC | 26 Feb 24 12:01 UTC |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:58:49
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:58:49.462776   11684 out.go:291] Setting OutFile to fd 1776 ...
	I0226 11:58:49.462776   11684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:58:49.462776   11684 out.go:304] Setting ErrFile to fd 2020...
	I0226 11:58:49.462776   11684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:58:49.486423   11684 out.go:298] Setting JSON to false
	I0226 11:58:49.493547   11684 start.go:129] hostinfo: {"hostname":"minikube7","uptime":6006,"bootTime":1708942723,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 11:58:49.493547   11684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 11:58:49.498810   11684 out.go:177] * [kubenet-968100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 11:58:49.503131   11684 notify.go:220] Checking for updates...
	I0226 11:58:49.506715   11684 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 11:58:49.514044   11684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:58:49.519873   11684 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 11:58:49.533324   11684 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:58:49.537864   11684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:58:49.542834   11684 config.go:182] Loaded profile config "bridge-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:58:49.542834   11684 config.go:182] Loaded profile config "flannel-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:58:49.543839   11684 config.go:182] Loaded profile config "old-k8s-version-321200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 11:58:49.543839   11684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:58:49.882500   11684 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 11:58:49.890519   11684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:58:50.312386   11684 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:58:50.264958915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:58:50.320392   11684 out.go:177] * Using the docker driver based on user configuration
	I0226 11:58:50.330385   11684 start.go:299] selected driver: docker
	I0226 11:58:50.330385   11684 start.go:903] validating driver "docker" against <nil>
	I0226 11:58:50.330385   11684 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:58:50.467384   11684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:58:50.877848   11684 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-26 11:58:50.832162825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 11:58:50.878772   11684 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:58:50.879946   11684 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:58:50.882708   11684 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 11:58:50.886506   11684 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0226 11:58:50.886575   11684 start_flags.go:323] config:
	{Name:kubenet-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:58:50.890996   11684 out.go:177] * Starting control plane node kubenet-968100 in cluster kubenet-968100
	I0226 11:58:50.896276   11684 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 11:58:50.902070   11684 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:58:50.909440   11684 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:58:50.909440   11684 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:58:50.909440   11684 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 11:58:50.909440   11684 cache.go:56] Caching tarball of preloaded images
	I0226 11:58:50.909440   11684 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 11:58:50.909440   11684 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0226 11:58:50.909440   11684 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\config.json ...
	I0226 11:58:50.910431   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\config.json: {Name:mk447734039250feafe4a6fa48e3612ca359a1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:58:51.125366   11684 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:58:51.125366   11684 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:58:51.125366   11684 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:58:51.125366   11684 start.go:365] acquiring machines lock for kubenet-968100: {Name:mk4d4f541c1002c737ff1cec6a45768ae16fec80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:58:51.126366   11684 start.go:369] acquired machines lock for "kubenet-968100" in 999.6µs
	I0226 11:58:51.126366   11684 start.go:93] Provisioning new machine with config: &{Name:kubenet-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 11:58:51.126366   11684 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:58:51.134358   11684 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0226 11:58:51.135387   11684 start.go:159] libmachine.API.Create for "kubenet-968100" (driver="docker")
	I0226 11:58:51.135387   11684 client.go:168] LocalClient.Create starting
	I0226 11:58:51.135387   11684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0226 11:58:51.135387   11684 main.go:141] libmachine: Decoding PEM data...
	I0226 11:58:51.135387   11684 main.go:141] libmachine: Parsing certificate...
	I0226 11:58:51.136375   11684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0226 11:58:51.136375   11684 main.go:141] libmachine: Decoding PEM data...
	I0226 11:58:51.136375   11684 main.go:141] libmachine: Parsing certificate...
	I0226 11:58:51.150362   11684 cli_runner.go:164] Run: docker network inspect kubenet-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:58:51.343550   11684 cli_runner.go:211] docker network inspect kubenet-968100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:58:51.352553   11684 network_create.go:281] running [docker network inspect kubenet-968100] to gather additional debugging logs...
	I0226 11:58:51.352553   11684 cli_runner.go:164] Run: docker network inspect kubenet-968100
	W0226 11:58:51.532691   11684 cli_runner.go:211] docker network inspect kubenet-968100 returned with exit code 1
	I0226 11:58:51.532691   11684 network_create.go:284] error running [docker network inspect kubenet-968100]: docker network inspect kubenet-968100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-968100 not found
	I0226 11:58:51.532691   11684 network_create.go:286] output of [docker network inspect kubenet-968100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-968100 not found
	
	** /stderr **
	I0226 11:58:51.542710   11684 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:58:51.762660   11684 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:58:51.787664   11684 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a3ec0}
	I0226 11:58:51.788672   11684 network_create.go:124] attempt to create docker network kubenet-968100 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0226 11:58:51.801915   11684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100
	W0226 11:58:51.993135   11684 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100 returned with exit code 1
	W0226 11:58:51.993135   11684 network_create.go:149] failed to create docker network kubenet-968100 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 11:58:51.993135   11684 network_create.go:116] failed to create docker network kubenet-968100 192.168.58.0/24, will retry: subnet is taken
	I0226 11:58:52.028123   11684 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 11:58:52.051065   11684 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002239ec0}
	I0226 11:58:52.051065   11684 network_create.go:124] attempt to create docker network kubenet-968100 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0226 11:58:52.060993   11684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-968100 kubenet-968100
	I0226 11:58:52.401700   11684 network_create.go:108] docker network kubenet-968100 192.168.67.0/24 created
	I0226 11:58:52.401700   11684 kic.go:121] calculated static IP "192.168.67.2" for the "kubenet-968100" container
	I0226 11:58:52.429817   11684 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:58:52.629326   11684 cli_runner.go:164] Run: docker volume create kubenet-968100 --label name.minikube.sigs.k8s.io=kubenet-968100 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:58:52.835254   11684 oci.go:103] Successfully created a docker volume kubenet-968100
	I0226 11:58:52.847112   11684 cli_runner.go:164] Run: docker run --rm --name kubenet-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --entrypoint /usr/bin/test -v kubenet-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:58:55.434970   11684 cli_runner.go:217] Completed: docker run --rm --name kubenet-968100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --entrypoint /usr/bin/test -v kubenet-968100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (2.5878382s)
	I0226 11:58:55.435971   11684 oci.go:107] Successfully prepared a docker volume kubenet-968100
	I0226 11:58:55.435971   11684 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:58:55.435971   11684 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:58:55.445968   11684 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-968100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:59:19.468057   11684 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-968100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (24.0219113s)
	I0226 11:59:19.475642   11684 kic.go:203] duration metric: took 24.039493 seconds to extract preloaded images to volume
	I0226 11:59:19.484643   11684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:59:19.857161   11684 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:90 SystemTime:2024-02-26 11:59:19.818461467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 11:59:19.869979   11684 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 11:59:20.243542   11684 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-968100 --name kubenet-968100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-968100 --network kubenet-968100 --ip 192.168.67.2 --volume kubenet-968100:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 11:59:22.135281   11684 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-968100 --name kubenet-968100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-968100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-968100 --network kubenet-968100 --ip 192.168.67.2 --volume kubenet-968100:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf: (1.8907097s)
	I0226 11:59:22.147281   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Running}}
	I0226 11:59:22.355740   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 11:59:22.525700   11684 cli_runner.go:164] Run: docker exec kubenet-968100 stat /var/lib/dpkg/alternatives/iptables
	I0226 11:59:22.816214   11684 oci.go:144] the created container "kubenet-968100" has a running status.
	I0226 11:59:22.816214   11684 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa...
	I0226 11:59:23.024213   11684 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 11:59:23.567345   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 11:59:23.804327   11684 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 11:59:23.804327   11684 kic_runner.go:114] Args: [docker exec --privileged kubenet-968100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 11:59:24.110802   11684 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa...
	I0226 11:59:27.017532   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 11:59:27.209192   11684 machine.go:88] provisioning docker machine ...
	I0226 11:59:27.209192   11684 ubuntu.go:169] provisioning hostname "kubenet-968100"
	I0226 11:59:27.217186   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:27.433469   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:27.444745   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:27.444745   11684 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-968100 && echo "kubenet-968100" | sudo tee /etc/hostname
	I0226 11:59:27.680961   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-968100
	
	I0226 11:59:27.692100   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:27.890314   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:27.891293   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:27.891293   11684 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-968100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-968100/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-968100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:59:28.103109   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:59:28.103261   11684 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0226 11:59:28.103261   11684 ubuntu.go:177] setting up certificates
	I0226 11:59:28.103261   11684 provision.go:83] configureAuth start
	I0226 11:59:28.117214   11684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-968100
	I0226 11:59:28.322297   11684 provision.go:138] copyHostCerts
	I0226 11:59:28.323358   11684 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0226 11:59:28.323358   11684 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0226 11:59:28.324115   11684 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0226 11:59:28.325658   11684 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0226 11:59:28.325754   11684 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0226 11:59:28.326133   11684 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0226 11:59:28.327417   11684 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0226 11:59:28.327417   11684 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0226 11:59:28.328106   11684 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0226 11:59:28.329676   11684 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-968100 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-968100]
	I0226 11:59:28.584753   11684 provision.go:172] copyRemoteCerts
	I0226 11:59:28.601991   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:59:28.612757   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:28.788916   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:28.941271   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0226 11:59:28.999437   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0226 11:59:29.065616   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 11:59:29.120202   11684 provision.go:86] duration metric: configureAuth took 1.0168767s
	I0226 11:59:29.120258   11684 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:59:29.120926   11684 config.go:182] Loaded profile config "kubenet-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:59:29.137779   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:29.337368   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:29.338361   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:29.338361   11684 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 11:59:29.534297   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 11:59:29.534297   11684 ubuntu.go:71] root file system type: overlay
	I0226 11:59:29.534297   11684 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 11:59:29.551278   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:29.732038   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:29.732038   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:29.732038   11684 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 11:59:29.955964   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 11:59:29.967185   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:30.156050   11684 main.go:141] libmachine: Using SSH client type: native
	I0226 11:59:30.156439   11684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1119d80] 0x111c960 <nil>  [] 0s} 127.0.0.1 55862 <nil> <nil>}
	I0226 11:59:30.156439   11684 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 11:59:32.081855   11684 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 11:59:29.943107196 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 11:59:32.081855   11684 machine.go:91] provisioned docker machine in 4.8726267s
	I0226 11:59:32.081855   11684 client.go:171] LocalClient.Create took 40.9461651s
	I0226 11:59:32.081855   11684 start.go:167] duration metric: libmachine.API.Create for "kubenet-968100" took 40.9461651s
	I0226 11:59:32.081855   11684 start.go:300] post-start starting for "kubenet-968100" (driver="docker")
	I0226 11:59:32.081855   11684 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:59:32.094850   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:59:32.104850   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:32.283589   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:32.449860   11684 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:59:32.462064   11684 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:59:32.462064   11684 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:59:32.462064   11684 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:59:32.462064   11684 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:59:32.462064   11684 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0226 11:59:32.462064   11684 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0226 11:59:32.463062   11684 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem -> 118682.pem in /etc/ssl/certs
	I0226 11:59:32.476060   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 11:59:32.520389   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /etc/ssl/certs/118682.pem (1708 bytes)
	I0226 11:59:32.590619   11684 start.go:303] post-start completed in 508.7601ms
	I0226 11:59:32.605621   11684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-968100
	I0226 11:59:32.817425   11684 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\config.json ...
	I0226 11:59:32.836756   11684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:59:32.844752   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:33.048505   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:33.188521   11684 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:59:33.201507   11684 start.go:128] duration metric: createHost completed in 42.0748295s
	I0226 11:59:33.201507   11684 start.go:83] releasing machines lock for "kubenet-968100", held for 42.0748295s
	I0226 11:59:33.210537   11684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-968100
	I0226 11:59:33.413848   11684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:59:33.422753   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:33.423742   11684 ssh_runner.go:195] Run: cat /version.json
	I0226 11:59:33.431741   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:33.613974   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:33.626972   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 11:59:33.991964   11684 ssh_runner.go:195] Run: systemctl --version
	I0226 11:59:34.024458   11684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 11:59:34.059268   11684 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0226 11:59:34.079195   11684 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0226 11:59:34.093193   11684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 11:59:34.173421   11684 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 11:59:34.173421   11684 start.go:475] detecting cgroup driver to use...
	I0226 11:59:34.173421   11684 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:59:34.173873   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:59:34.216452   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0226 11:59:34.246782   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 11:59:34.275336   11684 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 11:59:34.287821   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 11:59:34.320149   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:59:34.352353   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 11:59:34.398474   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 11:59:34.431275   11684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:59:34.469281   11684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 11:59:34.586786   11684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:59:34.687642   11684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:59:34.719877   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:35.003749   11684 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 11:59:35.143232   11684 start.go:475] detecting cgroup driver to use...
	I0226 11:59:35.143347   11684 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:59:35.164107   11684 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 11:59:35.196859   11684 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 11:59:35.208472   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 11:59:35.236459   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:59:35.310241   11684 ssh_runner.go:195] Run: which cri-dockerd
	I0226 11:59:35.332243   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 11:59:35.361165   11684 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (193 bytes)
	I0226 11:59:35.420146   11684 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 11:59:35.634748   11684 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 11:59:35.777039   11684 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 11:59:35.777254   11684 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 11:59:35.828157   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:35.981088   11684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 11:59:36.715732   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0226 11:59:36.971473   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 11:59:37.010790   11684 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0226 11:59:37.207519   11684 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0226 11:59:37.375246   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:37.530464   11684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0226 11:59:37.568762   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 11:59:37.606256   11684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:59:37.761173   11684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0226 11:59:37.932692   11684 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0226 11:59:37.946224   11684 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0226 11:59:37.958373   11684 start.go:543] Will wait 60s for crictl version
	I0226 11:59:37.978346   11684 ssh_runner.go:195] Run: which crictl
	I0226 11:59:38.001282   11684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 11:59:38.107823   11684 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0226 11:59:38.117383   11684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:59:38.192471   11684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 11:59:38.242072   11684 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0226 11:59:38.252438   11684 cli_runner.go:164] Run: docker exec -t kubenet-968100 dig +short host.docker.internal
	I0226 11:59:38.507517   11684 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 11:59:38.523561   11684 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 11:59:38.538099   11684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:59:38.568506   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-968100
	I0226 11:59:38.766723   11684 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 11:59:38.776638   11684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:59:38.819588   11684 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 11:59:38.819588   11684 docker.go:615] Images already preloaded, skipping extraction
	I0226 11:59:38.829701   11684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 11:59:38.880713   11684 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 11:59:38.880713   11684 cache_images.go:84] Images are preloaded, skipping loading
	I0226 11:59:38.893595   11684 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 11:59:39.007223   11684 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0226 11:59:39.007223   11684 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:59:39.007223   11684 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-968100 NodeName:kubenet-968100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 11:59:39.007223   11684 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-968100"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:59:39.007223   11684 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubenet-968100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:59:39.020587   11684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 11:59:39.041934   11684 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:59:39.054628   11684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:59:39.075875   11684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
	I0226 11:59:39.106085   11684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:59:39.136497   11684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0226 11:59:39.182369   11684 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:59:39.196951   11684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:59:39.215790   11684 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100 for IP: 192.168.67.2
	I0226 11:59:39.215790   11684 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.216597   11684 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0226 11:59:39.216867   11684 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0226 11:59:39.217555   11684 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.key
	I0226 11:59:39.217671   11684 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt with IP's: []
	I0226 11:59:39.486729   11684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt ...
	I0226 11:59:39.486729   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.crt: {Name:mk0c4c5b5f6bf83cc7f3221d74996d34e7e9722c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.488240   11684 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.key ...
	I0226 11:59:39.488240   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\client.key: {Name:mkbba140057490f59c3bf6f4aab1ab4141707741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.488578   11684 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e
	I0226 11:59:39.489601   11684 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:59:39.574808   11684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e ...
	I0226 11:59:39.574808   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e: {Name:mke3e48a6ec59f2fb3fc7f0a538c8a0fd45851f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.576759   11684 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e ...
	I0226 11:59:39.576759   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e: {Name:mkf5ad641787e565fa40ed23ba72170388f003f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.578279   11684 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt
	I0226 11:59:39.587286   11684 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key
	I0226 11:59:39.587964   11684 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key
	I0226 11:59:39.589003   11684 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt with IP's: []
	I0226 11:59:39.753658   11684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt ...
	I0226 11:59:39.753658   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt: {Name:mk70f1435fcc5d980ede8ca3f74b6fbaacaeb591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.754682   11684 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key ...
	I0226 11:59:39.754682   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key: {Name:mk4c7d5ad5f032a84d020dd948a761163af3cbcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:59:39.765460   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem (1338 bytes)
	W0226 11:59:39.765460   11684 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868_empty.pem, impossibly tiny 0 bytes
	I0226 11:59:39.765460   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0226 11:59:39.766506   11684 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem (1708 bytes)
	I0226 11:59:39.768500   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:59:39.809947   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 11:59:39.856185   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:59:39.895390   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-968100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 11:59:39.937313   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:59:39.978269   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:59:40.019062   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:59:40.066354   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:59:40.105811   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:59:40.150210   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\11868.pem --> /usr/share/ca-certificates/11868.pem (1338 bytes)
	I0226 11:59:40.190172   11684 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\118682.pem --> /usr/share/ca-certificates/118682.pem (1708 bytes)
	I0226 11:59:40.230523   11684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:59:40.272735   11684 ssh_runner.go:195] Run: openssl version
	I0226 11:59:40.296477   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:59:40.327587   11684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:59:40.338489   11684 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:28 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:59:40.350183   11684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:59:40.377607   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:59:40.409835   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11868.pem && ln -fs /usr/share/ca-certificates/11868.pem /etc/ssl/certs/11868.pem"
	I0226 11:59:40.439708   11684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11868.pem
	I0226 11:59:40.451398   11684 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:37 /usr/share/ca-certificates/11868.pem
	I0226 11:59:40.464412   11684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11868.pem
	I0226 11:59:40.488359   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11868.pem /etc/ssl/certs/51391683.0"
	I0226 11:59:40.517952   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118682.pem && ln -fs /usr/share/ca-certificates/118682.pem /etc/ssl/certs/118682.pem"
	I0226 11:59:40.548360   11684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118682.pem
	I0226 11:59:40.559034   11684 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:37 /usr/share/ca-certificates/118682.pem
	I0226 11:59:40.572549   11684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118682.pem
	I0226 11:59:40.601241   11684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118682.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:59:40.633143   11684 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:59:40.643786   11684 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 11:59:40.644047   11684 kubeadm.go:404] StartCluster: {Name:kubenet-968100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-968100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:59:40.653403   11684 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 11:59:40.706236   11684 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:59:40.738787   11684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:59:40.758174   11684 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:59:40.768691   11684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:59:40.787307   11684 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:59:40.787374   11684 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:59:40.958429   11684 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0226 11:59:41.108408   11684 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:59:57.415748   11684 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0226 11:59:57.415886   11684 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:59:57.416131   11684 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:59:57.416131   11684 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:59:57.416131   11684 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:59:57.416909   11684 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:59:57.418960   11684 out.go:204]   - Generating certificates and keys ...
	I0226 11:59:57.419850   11684 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:59:57.419928   11684 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:59:57.419928   11684 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:59:57.419928   11684 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:59:57.420567   11684 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:59:57.420640   11684 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:59:57.420768   11684 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:59:57.420969   11684 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubenet-968100 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 11:59:57.421211   11684 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubenet-968100 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:59:57.421565   11684 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:59:57.422121   11684 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:59:57.422184   11684 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:59:57.422350   11684 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:59:57.422701   11684 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:59:57.422828   11684 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:59:57.423020   11684 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:59:57.423278   11684 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:59:57.425919   11684 out.go:204]   - Booting up control plane ...
	I0226 11:59:57.426215   11684 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:59:57.426445   11684 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:59:57.426593   11684 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:59:57.426593   11684 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:59:57.427171   11684 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:59:57.427391   11684 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 11:59:57.427507   11684 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:59:57.427507   11684 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505149 seconds
	I0226 11:59:57.428057   11684 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 11:59:57.428057   11684 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 11:59:57.428057   11684 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 11:59:57.428794   11684 kubeadm.go:322] [mark-control-plane] Marking the node kubenet-968100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 11:59:57.428794   11684 kubeadm.go:322] [bootstrap-token] Using token: 2nstr1.vudotf4rhd8prt3r
	I0226 11:59:57.433580   11684 out.go:204]   - Configuring RBAC rules ...
	I0226 11:59:57.433865   11684 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 11:59:57.433865   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 11:59:57.433865   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 11:59:57.434813   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 11:59:57.435152   11684 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 11:59:57.435152   11684 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 11:59:57.435152   11684 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 11:59:57.435712   11684 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 11:59:57.435932   11684 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 11:59:57.435932   11684 kubeadm.go:322] 
	I0226 11:59:57.435932   11684 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 11:59:57.435932   11684 kubeadm.go:322] 
	I0226 11:59:57.435932   11684 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 11:59:57.435932   11684 kubeadm.go:322] 
	I0226 11:59:57.435932   11684 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 11:59:57.436468   11684 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 11:59:57.436587   11684 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 11:59:57.436587   11684 kubeadm.go:322] 
	I0226 11:59:57.436587   11684 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 11:59:57.436587   11684 kubeadm.go:322] 
	I0226 11:59:57.436587   11684 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 11:59:57.436587   11684 kubeadm.go:322] 
	I0226 11:59:57.437158   11684 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 11:59:57.437304   11684 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 11:59:57.437304   11684 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 11:59:57.437470   11684 kubeadm.go:322] 
	I0226 11:59:57.437671   11684 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 11:59:57.437981   11684 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 11:59:57.437981   11684 kubeadm.go:322] 
	I0226 11:59:57.437981   11684 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2nstr1.vudotf4rhd8prt3r \
	I0226 11:59:57.437981   11684 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d \
	I0226 11:59:57.438588   11684 kubeadm.go:322] 	--control-plane 
	I0226 11:59:57.438588   11684 kubeadm.go:322] 
	I0226 11:59:57.438588   11684 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 11:59:57.438588   11684 kubeadm.go:322] 
	I0226 11:59:57.438588   11684 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2nstr1.vudotf4rhd8prt3r \
	I0226 11:59:57.439233   11684 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:692c5187086ebf6703a77455376cf1d08082795eb601b6b948c6cdfd3f4f8e8d 
	I0226 11:59:57.439233   11684 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0226 11:59:57.439233   11684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 11:59:57.457199   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=kubenet-968100 minikube.k8s.io/updated_at=2024_02_26T11_59_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:57.458727   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:57.478022   11684 ops.go:34] apiserver oom_adj: -16
	I0226 11:59:58.095212   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:58.601901   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:59.112511   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:59:59.601732   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:00.092331   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:00.596749   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:01.100947   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:01.605414   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:02.098988   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:02.601091   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:03.094554   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:03.601664   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:04.108587   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:04.594260   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:05.101399   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:05.605636   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:06.094766   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:06.599985   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:07.092311   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:07.594463   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:08.094947   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:08.598990   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:09.107874   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:09.599780   11684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:00:09.804019   11684 kubeadm.go:1088] duration metric: took 12.3646948s to wait for elevateKubeSystemPrivileges.
	I0226 12:00:09.804019   11684 kubeadm.go:406] StartCluster complete in 29.1597569s
	I0226 12:00:09.804019   11684 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:00:09.804711   11684 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 12:00:09.806536   11684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:00:09.808582   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 12:00:09.808679   11684 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 12:00:09.808801   11684 addons.go:69] Setting storage-provisioner=true in profile "kubenet-968100"
	I0226 12:00:09.808801   11684 addons.go:234] Setting addon storage-provisioner=true in "kubenet-968100"
	I0226 12:00:09.808801   11684 addons.go:69] Setting default-storageclass=true in profile "kubenet-968100"
	I0226 12:00:09.808801   11684 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-968100"
	I0226 12:00:09.808801   11684 host.go:66] Checking if "kubenet-968100" exists ...
	I0226 12:00:09.808801   11684 config.go:182] Loaded profile config "kubenet-968100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 12:00:09.834322   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 12:00:09.837013   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 12:00:10.028525   11684 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 12:00:10.031125   11684 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 12:00:10.031125   11684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 12:00:10.044822   11684 addons.go:234] Setting addon default-storageclass=true in "kubenet-968100"
	I0226 12:00:10.044883   11684 host.go:66] Checking if "kubenet-968100" exists ...
	I0226 12:00:10.044883   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 12:00:10.072331   11684 cli_runner.go:164] Run: docker container inspect kubenet-968100 --format={{.State.Status}}
	I0226 12:00:10.229991   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 12:00:10.265380   11684 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 12:00:10.265380   11684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 12:00:10.280518   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-968100
	I0226 12:00:10.426779   11684 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubenet-968100" context rescaled to 1 replicas
	I0226 12:00:10.426779   11684 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 12:00:10.430766   11684 out.go:177] * Verifying Kubernetes components...
	I0226 12:00:10.446754   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:00:10.456914   11684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0226 12:00:10.463837   11684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-968100\id_rsa Username:docker}
	I0226 12:00:10.491297   11684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-968100
	I0226 12:00:10.600727   11684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 12:00:10.684315   11684 node_ready.go:35] waiting up to 15m0s for node "kubenet-968100" to be "Ready" ...
	I0226 12:00:10.756874   11684 node_ready.go:49] node "kubenet-968100" has status "Ready":"True"
	I0226 12:00:10.756919   11684 node_ready.go:38] duration metric: took 72.5286ms waiting for node "kubenet-968100" to be "Ready" ...
	I0226 12:00:10.756995   11684 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:00:10.781104   11684 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:10.894572   11684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 12:00:12.969804   11684 pod_ready.go:102] pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:14.270352   11684 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.8134105s)
	I0226 12:00:14.270490   11684 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0226 12:00:14.558441   11684 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.9576848s)
	I0226 12:00:14.558441   11684 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.6636543s)
	I0226 12:00:14.586624   11684 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 12:00:14.588699   11684 addons.go:505] enable addons completed in 4.7799843s: enabled=[storage-provisioner default-storageclass]
	I0226 12:00:14.805460   11684 pod_ready.go:92] pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:14.805513   11684 pod_ready.go:81] duration metric: took 4.0242069s waiting for pod "coredns-5dd5756b68-8d5fm" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:14.805513   11684 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:16.834722   11684 pod_ready.go:102] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:19.335948   11684 pod_ready.go:102] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:21.834406   11684 pod_ready.go:102] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status "Ready":"False"
	I0226 12:00:24.329289   11684 pod_ready.go:97] pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-26 12:00:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-26 12:00:13 +0000 UTC,FinishedAt:2024-02-26 12:00:23 +0000 UTC,ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410 Started:0xc002bb4ad0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0226 12:00:24.329289   11684 pod_ready.go:81] duration metric: took 9.5237055s waiting for pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace to be "Ready" ...
	E0226 12:00:24.329289   11684 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-bpt97" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 12:00:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-26 12:00:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running
:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-26 12:00:13 +0000 UTC,FinishedAt:2024-02-26 12:00:23 +0000 UTC,ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://2d270b7df5d60f7d8d3750b0e300d791d8a6bfb53604ead42a8531753edbe410 Started:0xc002bb4ad0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0226 12:00:24.329289   11684 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.346720   11684 pod_ready.go:92] pod "etcd-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.346720   11684 pod_ready.go:81] duration metric: took 17.2377ms waiting for pod "etcd-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.346720   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.360255   11684 pod_ready.go:92] pod "kube-apiserver-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.360255   11684 pod_ready.go:81] duration metric: took 13.535ms waiting for pod "kube-apiserver-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.360255   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.375744   11684 pod_ready.go:92] pod "kube-controller-manager-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.375815   11684 pod_ready.go:81] duration metric: took 15.4889ms waiting for pod "kube-controller-manager-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.375815   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-mz5j7" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.401167   11684 pod_ready.go:92] pod "kube-proxy-mz5j7" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.401167   11684 pod_ready.go:81] duration metric: took 25.3516ms waiting for pod "kube-proxy-mz5j7" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.401722   11684 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.722691   11684 pod_ready.go:92] pod "kube-scheduler-kubenet-968100" in "kube-system" namespace has status "Ready":"True"
	I0226 12:00:24.722691   11684 pod_ready.go:81] duration metric: took 320.9674ms waiting for pod "kube-scheduler-kubenet-968100" in "kube-system" namespace to be "Ready" ...
	I0226 12:00:24.722691   11684 pod_ready.go:38] duration metric: took 13.9655934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:00:24.722786   11684 api_server.go:52] waiting for apiserver process to appear ...
	I0226 12:00:24.736874   11684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:00:24.765340   11684 api_server.go:72] duration metric: took 14.3384547s to wait for apiserver process to appear ...
	I0226 12:00:24.765375   11684 api_server.go:88] waiting for apiserver healthz status ...
	I0226 12:00:24.765415   11684 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55861/healthz ...
	I0226 12:00:24.787487   11684 api_server.go:279] https://127.0.0.1:55861/healthz returned 200:
	ok
	I0226 12:00:24.793024   11684 api_server.go:141] control plane version: v1.28.4
	I0226 12:00:24.793109   11684 api_server.go:131] duration metric: took 27.694ms to wait for apiserver health ...
	I0226 12:00:24.793109   11684 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:00:24.942541   11684 system_pods.go:59] 7 kube-system pods found
	I0226 12:00:24.942541   11684 system_pods.go:61] "coredns-5dd5756b68-8d5fm" [44abce95-9488-4f8d-b4f7-3957a218aee2] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "etcd-kubenet-968100" [615298a9-cf9c-4996-9bef-9d73dd60c158] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "kube-apiserver-kubenet-968100" [42272ac0-93d3-4cd6-ba29-5b8391251399] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "kube-controller-manager-kubenet-968100" [66df9813-8c0f-4139-a21b-44fb8f435401] Running
	I0226 12:00:24.942624   11684 system_pods.go:61] "kube-proxy-mz5j7" [08d21e16-bfd3-435b-b021-dc6a157c5527] Running
	I0226 12:00:24.942728   11684 system_pods.go:61] "kube-scheduler-kubenet-968100" [fea9e51e-6f10-4817-9ded-7e4c809359a6] Running
	I0226 12:00:24.942728   11684 system_pods.go:61] "storage-provisioner" [960a8845-f6dc-4d2a-8647-2ec83adf88de] Running
	I0226 12:00:24.942793   11684 system_pods.go:74] duration metric: took 149.6238ms to wait for pod list to return data ...
	I0226 12:00:24.942821   11684 default_sa.go:34] waiting for default service account to be created ...
	I0226 12:00:25.123320   11684 default_sa.go:45] found service account: "default"
	I0226 12:00:25.123854   11684 default_sa.go:55] duration metric: took 180.969ms for default service account to be created ...
	I0226 12:00:25.123854   11684 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 12:00:25.334693   11684 system_pods.go:86] 7 kube-system pods found
	I0226 12:00:25.334693   11684 system_pods.go:89] "coredns-5dd5756b68-8d5fm" [44abce95-9488-4f8d-b4f7-3957a218aee2] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "etcd-kubenet-968100" [615298a9-cf9c-4996-9bef-9d73dd60c158] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-apiserver-kubenet-968100" [42272ac0-93d3-4cd6-ba29-5b8391251399] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-controller-manager-kubenet-968100" [66df9813-8c0f-4139-a21b-44fb8f435401] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-proxy-mz5j7" [08d21e16-bfd3-435b-b021-dc6a157c5527] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "kube-scheduler-kubenet-968100" [fea9e51e-6f10-4817-9ded-7e4c809359a6] Running
	I0226 12:00:25.334693   11684 system_pods.go:89] "storage-provisioner" [960a8845-f6dc-4d2a-8647-2ec83adf88de] Running
	I0226 12:00:25.334693   11684 system_pods.go:126] duration metric: took 210.8376ms to wait for k8s-apps to be running ...
	I0226 12:00:25.334693   11684 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 12:00:25.345514   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:00:25.373071   11684 system_svc.go:56] duration metric: took 38.3777ms WaitForService to wait for kubelet.
	I0226 12:00:25.373071   11684 kubeadm.go:581] duration metric: took 14.9461813s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 12:00:25.373071   11684 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:00:25.534679   11684 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0226 12:00:25.534679   11684 node_conditions.go:123] node cpu capacity is 16
	I0226 12:00:25.534679   11684 node_conditions.go:105] duration metric: took 161.6068ms to run NodePressure ...
	I0226 12:00:25.535223   11684 start.go:228] waiting for startup goroutines ...
	I0226 12:00:25.535223   11684 start.go:233] waiting for cluster config update ...
	I0226 12:00:25.535223   11684 start.go:242] writing updated cluster config ...
	I0226 12:00:25.546875   11684 ssh_runner.go:195] Run: rm -f paused
	I0226 12:00:25.681754   11684 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 12:00:25.687316   11684 out.go:177] * Done! kubectl is now configured to use "kubenet-968100" cluster and "default" namespace by default
	I0226 12:02:02.411780   10808 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 12:02:02.412124   10808 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 12:02:02.419498   10808 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 12:02:02.419498   10808 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 12:02:02.420047   10808 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 12:02:02.420163   10808 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 12:02:02.420163   10808 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 12:02:02.420801   10808 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 12:02:02.421076   10808 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 12:02:02.421178   10808 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 12:02:02.421395   10808 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 12:02:02.424697   10808 out.go:204]   - Generating certificates and keys ...
	I0226 12:02:02.425833   10808 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 12:02:02.426007   10808 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 12:02:02.426252   10808 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 12:02:02.426462   10808 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 12:02:02.426621   10808 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 12:02:02.426771   10808 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 12:02:02.426995   10808 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 12:02:02.427148   10808 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 12:02:02.427347   10808 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 12:02:02.427499   10808 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 12:02:02.427606   10808 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 12:02:02.427782   10808 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 12:02:02.427967   10808 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 12:02:02.428065   10808 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 12:02:02.428157   10808 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 12:02:02.430918   10808 out.go:204]   - Booting up control plane ...
	I0226 12:02:02.431350   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 12:02:02.431401   10808 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 12:02:02.432186   10808 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 12:02:02.432186   10808 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 12:02:02.432186   10808 kubeadm.go:322] 
	I0226 12:02:02.432486   10808 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 12:02:02.432535   10808 kubeadm.go:322] 	timed out waiting for the condition
	I0226 12:02:02.432624   10808 kubeadm.go:322] 
	I0226 12:02:02.432759   10808 kubeadm.go:322] This error is likely caused by:
	I0226 12:02:02.432855   10808 kubeadm.go:322] 	- The kubelet is not running
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 12:02:02.433010   10808 kubeadm.go:322] 
	I0226 12:02:02.433010   10808 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 12:02:02.433010   10808 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 12:02:02.433010   10808 kubeadm.go:322] 
	I0226 12:02:02.433539   10808 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 12:02:02.433913   10808 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 12:02:02.434165   10808 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 12:02:02.434297   10808 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 12:02:02.434487   10808 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 12:02:02.434487   10808 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 12:02:02.434860   10808 kubeadm.go:406] StartCluster complete in 12m33.1269006s
	I0226 12:02:02.442099   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 12:02:02.490569   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.490672   10808 logs.go:278] No container was found matching "kube-apiserver"
	I0226 12:02:02.502331   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 12:02:02.541142   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.541142   10808 logs.go:278] No container was found matching "etcd"
	I0226 12:02:02.550354   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 12:02:02.587881   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.587881   10808 logs.go:278] No container was found matching "coredns"
	I0226 12:02:02.596635   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 12:02:02.635750   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.635846   10808 logs.go:278] No container was found matching "kube-scheduler"
	I0226 12:02:02.636707   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 12:02:02.683458   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.683458   10808 logs.go:278] No container was found matching "kube-proxy"
	I0226 12:02:02.692816   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 12:02:02.730653   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.730653   10808 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 12:02:02.739810   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 12:02:02.776933   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.776933   10808 logs.go:278] No container was found matching "kindnet"
	I0226 12:02:02.791523   10808 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 12:02:02.829156   10808 logs.go:276] 0 containers: []
	W0226 12:02:02.829359   10808 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 12:02:02.829359   10808 logs.go:123] Gathering logs for kubelet ...
	I0226 12:02:02.829359   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 12:02:02.873211   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 12:02:02.878537   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 12:02:02.879201   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0226 12:02:02.898957   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:50 old-k8s-version-321200 kubelet[11360]: E0226 12:01:50.055188   11360 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-321200_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0226 12:02:02.901305   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:51 old-k8s-version-321200 kubelet[11360]: E0226 12:01:51.050143   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0226 12:02:02.913165   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:56 old-k8s-version-321200 kubelet[11360]: E0226 12:01:56.056683   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0226 12:02:02.913165   10808 logs.go:138] Found kubelet problem: Feb 26 12:01:56 old-k8s-version-321200 kubelet[11360]: E0226 12:01:56.058255   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 12:02:02.925848   10808 logs.go:123] Gathering logs for dmesg ...
	I0226 12:02:02.925848   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 12:02:02.960790   10808 logs.go:123] Gathering logs for describe nodes ...
	I0226 12:02:02.960790   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 12:02:03.127567   10808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 12:02:03.127567   10808 logs.go:123] Gathering logs for Docker ...
	I0226 12:02:03.127567   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 12:02:03.180152   10808 logs.go:123] Gathering logs for container status ...
	I0226 12:02:03.180152   10808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 12:02:03.265121   10808 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 12:02:03.266022   10808 out.go:239] * 
	W0226 12:02:03.266022   10808 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 12:02:03.266022   10808 out.go:239] * 
	W0226 12:02:03.267771   10808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 12:02:03.272116   10808 out.go:177] X Problems detected in kubelet:
	I0226 12:02:03.277200   10808 out.go:177]   Feb 26 12:01:39 old-k8s-version-321200 kubelet[11360]: E0226 12:01:39.083157   11360 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-old-k8s-version-321200_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0226 12:02:03.282640   10808 out.go:177]   Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.054143   11360 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-321200_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0226 12:02:03.288461   10808 out.go:177]   Feb 26 12:01:41 old-k8s-version-321200 kubelet[11360]: E0226 12:01:41.055459   11360 pod_workers.go:191] Error syncing pod 3409fb114a897785c2d1b6e0564bbb20 ("etcd-old-k8s-version-321200_kube-system(3409fb114a897785c2d1b6e0564bbb20)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0226 12:02:03.295212   10808 out.go:177] 
	W0226 12:02:03.297284   10808 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 12:02:03.297284   10808 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 12:02:03.297284   10808 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 12:02:03.301974   10808 out.go:177] 
	
	
	==> Docker <==
	Feb 26 11:49:19 old-k8s-version-321200 systemd[1]: docker.service: Deactivated successfully.
	Feb 26 11:49:19 old-k8s-version-321200 systemd[1]: Stopped Docker Application Container Engine.
	Feb 26 11:49:19 old-k8s-version-321200 systemd[1]: Starting Docker Application Container Engine...
	Feb 26 11:49:19 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:19.396489498Z" level=info msg="Starting up"
	Feb 26 11:49:20 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:20.314342121Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 26 11:49:25 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:25.524607951Z" level=info msg="Loading containers: start."
	Feb 26 11:49:25 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:25.902105685Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.009629742Z" level=info msg="Loading containers: done."
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165501852Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165622656Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165636856Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165644257Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165670758Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.165728759Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.235451801Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:49:26 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:49:26.235470401Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:49:26 old-k8s-version-321200 systemd[1]: Started Docker Application Container Engine.
	Feb 26 11:53:52 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:52.698379318Z" level=info msg="ignoring event" container=22eef693a3b25b970c3f15b213dae642250f97f419baab1b95e305257b0bf337 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:53:53 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:53.154075076Z" level=info msg="ignoring event" container=9f64d56ec69fdb95b0a13228a04c2b050bed331d09cfc2f97ba7579f488e520e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:53:53 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:53.609017011Z" level=info msg="ignoring event" container=3973f89914b8e77a02331ea5c13ddc208683027d2e5256a9e3d8bfe136978f77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:53:53 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:53:53.963497669Z" level=info msg="ignoring event" container=8f606e19a68cee783b718f9730ffa0d7ab6495fc67bdbba4a348ca1e4e2ab259 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:58 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:58.547150857Z" level=info msg="ignoring event" container=60471072de846312b8f561a812967c588c28956c65d9e28c8ae15470fcf390d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:59 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:59.052701708Z" level=info msg="ignoring event" container=0e678e59ced845ab74f0a63cd8d32af6aac57e6129e64fbcb469871b01b46009 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:59 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:59.309964423Z" level=info msg="ignoring event" container=d4761dcb7c564c98bf5f4a32e7c2b23e0c8ceaec1b2771b6decea1c8e45b8fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:57:59 old-k8s-version-321200 dockerd[1102]: time="2024-02-26T11:57:59.525063459Z" level=info msg="ignoring event" container=def1fa160e71067194fb930c3284b0eac9ba724960317fa85b3f262024ce625c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb26 11:48] hrtimer: interrupt took 2869333 ns
	
	
	==> kernel <==
	 12:07:32 up  1:48,  0 users,  load average: 0.09, 1.42, 3.46
	Linux old-k8s-version-321200 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 26 12:07:30 old-k8s-version-321200 kubelet[11360]: E0226 12:07:30.779346   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:30 old-k8s-version-321200 kubelet[11360]: E0226 12:07:30.878274   11360 kubelet_node_status.go:94] Unable to register node "old-k8s-version-321200" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:30 old-k8s-version-321200 kubelet[11360]: E0226 12:07:30.880379   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:30 old-k8s-version-321200 kubelet[11360]: E0226 12:07:30.981102   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.078452   11360 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.082010   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.182731   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.278220   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.283367   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.384101   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.478928   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-321200&limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.484773   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.585433   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.677670   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-321200&limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.686612   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.787136   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.878658   11360 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.888076   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:31 old-k8s-version-321200 kubelet[11360]: E0226 12:07:31.988828   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:32 old-k8s-version-321200 kubelet[11360]: E0226 12:07:32.080766   11360 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:32 old-k8s-version-321200 kubelet[11360]: E0226 12:07:32.089350   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:32 old-k8s-version-321200 kubelet[11360]: E0226 12:07:32.190176   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:32 old-k8s-version-321200 kubelet[11360]: E0226 12:07:32.280071   11360 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 26 12:07:32 old-k8s-version-321200 kubelet[11360]: E0226 12:07:32.291081   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	Feb 26 12:07:32 old-k8s-version-321200 kubelet[11360]: E0226 12:07:32.391804   11360 kubelet.go:2267] node "old-k8s-version-321200" not found
	

-- /stdout --
** stderr ** 
	W0226 12:07:30.816547   12664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-321200 -n old-k8s-version-321200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 2 (1.1832313s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 12:07:32.942920   14016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-321200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (324.90s)
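The same kubeadm preflight warnings recur throughout the failure output above, and the report itself suggests saving logs with `minikube logs --file=logs.txt`. A minimal sketch of filtering those warning tags out of such a saved log (the `logs.txt` name and sample contents are assumptions echoing the warnings quoted above, not part of this test run):

```shell
# Write a small sample log mirroring the preflight warnings quoted in the
# failure above (in a real run, this file would come from
# `minikube logs --file=logs.txt`).
cat > logs.txt <<'EOF'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
EOF

# Extract just the warning tags to see which preflight checks fired.
grep -o '\[WARNING [A-Za-z-]*\]' logs.txt
```

This kind of filtering makes it easy to spot the cgroup-driver mismatch that the report's own suggestion line (`--extra-config=kubelet.cgroup-driver=systemd`) targets.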


Test pass (288/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 7.69
4 TestDownloadOnly/v1.16.0/preload-exists 0.08
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.57
9 TestDownloadOnly/v1.16.0/DeleteAll 2.78
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 1.4
12 TestDownloadOnly/v1.28.4/json-events 7.67
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.28
18 TestDownloadOnly/v1.28.4/DeleteAll 2.36
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.27
21 TestDownloadOnly/v1.29.0-rc.2/json-events 7.08
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.28
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.83
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.1
29 TestDownloadOnlyKic 3.85
30 TestBinaryMirror 3.38
31 TestOffline 208.39
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.31
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.31
36 TestAddons/Setup 478.35
40 TestAddons/parallel/InspektorGadget 14.19
41 TestAddons/parallel/MetricsServer 7.4
42 TestAddons/parallel/HelmTiller 16.76
44 TestAddons/parallel/CSI 93.95
45 TestAddons/parallel/Headlamp 35.13
46 TestAddons/parallel/CloudSpanner 7.64
47 TestAddons/parallel/LocalPath 86.77
48 TestAddons/parallel/NvidiaDevicePlugin 6.85
49 TestAddons/parallel/Yakd 5.02
52 TestAddons/serial/GCPAuth/Namespaces 0.4
53 TestAddons/StoppedEnableDisable 14.52
54 TestCertOptions 80.08
55 TestCertExpiration 303.93
56 TestDockerFlags 80.98
57 TestForceSystemdFlag 90.29
58 TestForceSystemdEnv 82.47
65 TestErrorSpam/start 3.91
66 TestErrorSpam/status 3.91
67 TestErrorSpam/pause 3.9
68 TestErrorSpam/unpause 4.56
69 TestErrorSpam/stop 16.19
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 98.16
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 45.32
76 TestFunctional/serial/KubeContext 0.12
77 TestFunctional/serial/KubectlGetPods 0.25
80 TestFunctional/serial/CacheCmd/cache/add_remote 6.7
81 TestFunctional/serial/CacheCmd/cache/add_local 3.91
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.24
83 TestFunctional/serial/CacheCmd/cache/list 0.25
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.18
85 TestFunctional/serial/CacheCmd/cache/cache_reload 5.22
86 TestFunctional/serial/CacheCmd/cache/delete 0.52
87 TestFunctional/serial/MinikubeKubectlCmd 0.45
89 TestFunctional/serial/ExtraConfig 46.55
90 TestFunctional/serial/ComponentHealth 0.19
91 TestFunctional/serial/LogsCmd 2.59
92 TestFunctional/serial/LogsFileCmd 2.79
93 TestFunctional/serial/InvalidService 5.84
97 TestFunctional/parallel/DryRun 2.88
98 TestFunctional/parallel/InternationalLanguage 1.07
99 TestFunctional/parallel/StatusCmd 4.95
104 TestFunctional/parallel/AddonsCmd 0.7
105 TestFunctional/parallel/PersistentVolumeClaim 45.57
107 TestFunctional/parallel/SSHCmd 2.61
108 TestFunctional/parallel/CpCmd 8.6
109 TestFunctional/parallel/MySQL 78.07
110 TestFunctional/parallel/FileSync 1.24
111 TestFunctional/parallel/CertSync 7.37
115 TestFunctional/parallel/NodeLabels 0.19
117 TestFunctional/parallel/NonActiveRuntimeDisabled 1.11
119 TestFunctional/parallel/License 2.87
120 TestFunctional/parallel/ServiceCmd/DeployApp 21.48
121 TestFunctional/parallel/ProfileCmd/profile_not_create 1.92
122 TestFunctional/parallel/ProfileCmd/profile_list 1.9
123 TestFunctional/parallel/ProfileCmd/profile_json_output 1.92
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.47
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 36.05
129 TestFunctional/parallel/ServiceCmd/List 1.3
130 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
131 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
132 TestFunctional/parallel/ServiceCmd/Format 15.02
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.17
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
139 TestFunctional/parallel/Version/short 0.35
140 TestFunctional/parallel/Version/components 2.62
141 TestFunctional/parallel/ImageCommands/ImageListShort 1.02
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.88
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.9
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.98
145 TestFunctional/parallel/ImageCommands/ImageBuild 7.98
146 TestFunctional/parallel/ImageCommands/Setup 3.76
147 TestFunctional/parallel/ServiceCmd/URL 15.04
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 10.04
149 TestFunctional/parallel/DockerEnv/powershell 8.77
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.4
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.68
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.71
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.7
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 16.43
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.5
156 TestFunctional/parallel/ImageCommands/ImageRemove 1.75
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 7.01
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 6.72
159 TestFunctional/delete_addon-resizer_images 0.48
160 TestFunctional/delete_my-image_image 0.18
161 TestFunctional/delete_minikube_cached_images 0.17
165 TestImageBuild/serial/Setup 66.59
166 TestImageBuild/serial/NormalBuild 3.8
167 TestImageBuild/serial/BuildWithBuildArg 2.5
168 TestImageBuild/serial/BuildWithDockerIgnore 2.03
169 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.75
177 TestJSONOutput/start/Command 82.05
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 1.78
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 1.52
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 7.7
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 1.35
202 TestKicCustomNetwork/create_custom_network 76.59
203 TestKicCustomNetwork/use_default_bridge_network 75.93
204 TestKicExistingNetwork 77.19
205 TestKicCustomSubnet 74.81
206 TestKicStaticIP 76.29
207 TestMainNoArgs 0.23
208 TestMinikubeProfile 144.91
211 TestMountStart/serial/StartWithMountFirst 20.14
212 TestMountStart/serial/VerifyMountFirst 1.13
213 TestMountStart/serial/StartWithMountSecond 18.64
214 TestMountStart/serial/VerifyMountSecond 1.04
215 TestMountStart/serial/DeleteFirst 3.95
216 TestMountStart/serial/VerifyMountPostDelete 1.1
217 TestMountStart/serial/Stop 2.49
218 TestMountStart/serial/RestartStopped 13.17
219 TestMountStart/serial/VerifyMountPostStop 1.05
222 TestMultiNode/serial/FreshStart2Nodes 158.71
223 TestMultiNode/serial/DeployApp2Nodes 23.48
224 TestMultiNode/serial/PingHostFrom2Pods 2.52
225 TestMultiNode/serial/AddNode 54.2
226 TestMultiNode/serial/MultiNodeLabels 0.19
227 TestMultiNode/serial/ProfileList 1.27
228 TestMultiNode/serial/CopyFile 39.64
229 TestMultiNode/serial/StopNode 6.53
230 TestMultiNode/serial/StartAfterStop 22.9
231 TestMultiNode/serial/RestartKeepsNodes 149.99
232 TestMultiNode/serial/DeleteNode 11.25
233 TestMultiNode/serial/StopMultiNode 25.18
234 TestMultiNode/serial/RestartMultiNode 78.58
235 TestMultiNode/serial/ValidateNameConflict 71.66
239 TestPreload 183.63
240 TestScheduledStopWindows 138.07
244 TestInsufficientStorage 49.34
245 TestRunningBinaryUpgrade 190.71
248 TestMissingContainerUpgrade 364.45
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
251 TestNoKubernetes/serial/StartWithK8s 166.77
252 TestNoKubernetes/serial/StartWithStopK8s 26.59
253 TestNoKubernetes/serial/Start 23.96
254 TestStoppedBinaryUpgrade/Setup 0.56
255 TestStoppedBinaryUpgrade/Upgrade 172.79
256 TestNoKubernetes/serial/VerifyK8sNotRunning 1.2
257 TestNoKubernetes/serial/ProfileList 4.5
258 TestNoKubernetes/serial/Stop 8.05
259 TestNoKubernetes/serial/StartNoArgs 13.2
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.26
269 TestPause/serial/Start 103
270 TestStoppedBinaryUpgrade/MinikubeLogs 6.58
282 TestPause/serial/SecondStartNoReconfiguration 49.41
283 TestPause/serial/Pause 2
284 TestPause/serial/VerifyStatus 1.38
285 TestPause/serial/Unpause 1.78
286 TestPause/serial/PauseAgain 2.18
287 TestPause/serial/DeletePaused 6.12
288 TestPause/serial/VerifyDeletedResources 3.75
292 TestStartStop/group/no-preload/serial/FirstStart 129.43
293 TestStartStop/group/no-preload/serial/DeployApp 10.74
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.69
295 TestStartStop/group/no-preload/serial/Stop 12.84
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.19
297 TestStartStop/group/no-preload/serial/SecondStart 400.69
299 TestStartStop/group/embed-certs/serial/FirstStart 92.52
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.03
302 TestStartStop/group/embed-certs/serial/DeployApp 10.72
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.59
304 TestStartStop/group/embed-certs/serial/Stop 12.71
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.75
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.06
307 TestStartStop/group/embed-certs/serial/SecondStart 347.11
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.74
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.68
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.09
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.04
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 39.03
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.4
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.89
317 TestStartStop/group/no-preload/serial/Pause 9.46
319 TestStartStop/group/newest-cni/serial/FirstStart 97.76
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 26.04
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 28.02
322 TestStartStop/group/old-k8s-version/serial/Stop 3.86
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.64
324 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.31
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 1.23
327 TestStartStop/group/embed-certs/serial/Pause 11.27
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.46
329 TestNetworkPlugins/group/auto/Start 104.39
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.16
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.16
334 TestStartStop/group/newest-cni/serial/Stop 9.24
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.22
336 TestStartStop/group/newest-cni/serial/SecondStart 54.26
337 TestNetworkPlugins/group/kindnet/Start 111.3
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.89
341 TestStartStop/group/newest-cni/serial/Pause 11.31
342 TestNetworkPlugins/group/auto/KubeletFlags 1.66
343 TestNetworkPlugins/group/auto/NetCatPod 19.7
344 TestNetworkPlugins/group/calico/Start 193.41
345 TestNetworkPlugins/group/auto/DNS 0.33
346 TestNetworkPlugins/group/auto/Localhost 0.31
347 TestNetworkPlugins/group/auto/HairPin 0.32
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
349 TestNetworkPlugins/group/kindnet/KubeletFlags 1.31
350 TestNetworkPlugins/group/kindnet/NetCatPod 18.75
351 TestNetworkPlugins/group/kindnet/DNS 0.35
352 TestNetworkPlugins/group/kindnet/Localhost 0.35
353 TestNetworkPlugins/group/kindnet/HairPin 0.31
354 TestNetworkPlugins/group/custom-flannel/Start 116.19
355 TestNetworkPlugins/group/false/Start 87.27
356 TestNetworkPlugins/group/calico/ControllerPod 6.02
357 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.27
358 TestNetworkPlugins/group/custom-flannel/NetCatPod 18.75
359 TestNetworkPlugins/group/calico/KubeletFlags 1.92
360 TestNetworkPlugins/group/calico/NetCatPod 19.05
361 TestNetworkPlugins/group/custom-flannel/DNS 0.34
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.31
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.31
364 TestNetworkPlugins/group/calico/DNS 0.34
365 TestNetworkPlugins/group/calico/Localhost 0.31
366 TestNetworkPlugins/group/calico/HairPin 0.3
367 TestNetworkPlugins/group/false/KubeletFlags 1.29
368 TestNetworkPlugins/group/false/NetCatPod 17.71
369 TestNetworkPlugins/group/false/DNS 0.39
370 TestNetworkPlugins/group/false/Localhost 0.33
371 TestNetworkPlugins/group/false/HairPin 0.32
372 TestNetworkPlugins/group/enable-default-cni/Start 105.5
373 TestNetworkPlugins/group/flannel/Start 113.39
374 TestNetworkPlugins/group/bridge/Start 104.41
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.26
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 17.82
377 TestNetworkPlugins/group/flannel/ControllerPod 6.02
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.35
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.34
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
381 TestNetworkPlugins/group/flannel/KubeletFlags 1.32
382 TestNetworkPlugins/group/flannel/NetCatPod 17.6
383 TestNetworkPlugins/group/flannel/DNS 0.38
384 TestNetworkPlugins/group/flannel/Localhost 0.33
385 TestNetworkPlugins/group/flannel/HairPin 0.32
386 TestNetworkPlugins/group/bridge/KubeletFlags 1.24
387 TestNetworkPlugins/group/bridge/NetCatPod 20.63
388 TestNetworkPlugins/group/bridge/DNS 0.35
389 TestNetworkPlugins/group/bridge/Localhost 0.35
390 TestNetworkPlugins/group/bridge/HairPin 0.32
391 TestNetworkPlugins/group/kubenet/Start 96.52
392 TestNetworkPlugins/group/kubenet/KubeletFlags 1.15
393 TestNetworkPlugins/group/kubenet/NetCatPod 17.64
394 TestNetworkPlugins/group/kubenet/DNS 0.33
395 TestNetworkPlugins/group/kubenet/Localhost 0.31
396 TestNetworkPlugins/group/kubenet/HairPin 0.28
TestDownloadOnly/v1.16.0/json-events (7.69s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-993300 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-993300 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (7.684846s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.69s)

TestDownloadOnly/v1.16.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.08s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.57s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-993300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-993300: exit status 85 (569.237ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-993300 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC |          |
	|         | -p download-only-993300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 10:25:11
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 10:25:11.903248    2892 out.go:291] Setting OutFile to fd 640 ...
	I0226 10:25:11.905248    2892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:25:11.905248    2892 out.go:304] Setting ErrFile to fd 644...
	I0226 10:25:11.905248    2892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 10:25:11.919241    2892 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0226 10:25:11.931277    2892 out.go:298] Setting JSON to true
	I0226 10:25:11.934238    2892 start.go:129] hostinfo: {"hostname":"minikube7","uptime":388,"bootTime":1708942723,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:25:11.935238    2892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:25:11.941253    2892 out.go:97] [download-only-993300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:25:11.942102    2892 notify.go:220] Checking for updates...
	I0226 10:25:11.945182    2892 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	W0226 10:25:11.942102    2892 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0226 10:25:11.949269    2892 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:25:11.951193    2892 out.go:169] MINIKUBE_LOCATION=18222
	I0226 10:25:11.954682    2892 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0226 10:25:11.958938    2892 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 10:25:11.959178    2892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:25:12.257259    2892 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:25:12.266533    2892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:25:13.530534    2892 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2634256s)
	I0226 10:25:13.531829    2892 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:63 SystemTime:2024-02-26 10:25:13.494478958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:25:13.534497    2892 out.go:97] Using the docker driver based on user configuration
	I0226 10:25:13.534593    2892 start.go:299] selected driver: docker
	I0226 10:25:13.534670    2892 start.go:903] validating driver "docker" against <nil>
	I0226 10:25:13.552548    2892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:25:13.904590    2892 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:63 SystemTime:2024-02-26 10:25:13.867137529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:25:13.904590    2892 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 10:25:14.043657    2892 start_flags.go:394] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0226 10:25:14.043867    2892 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 10:25:14.048624    2892 out.go:169] Using Docker Desktop driver with root privileges
	I0226 10:25:14.050161    2892 cni.go:84] Creating CNI manager for ""
	I0226 10:25:14.050161    2892 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 10:25:14.050161    2892 start_flags.go:323] config:
	{Name:download-only-993300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-993300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:25:14.055858    2892 out.go:97] Starting control plane node download-only-993300 in cluster download-only-993300
	I0226 10:25:14.056041    2892 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 10:25:14.058055    2892 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 10:25:14.058055    2892 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 10:25:14.058055    2892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 10:25:14.098321    2892 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 10:25:14.098321    2892 cache.go:56] Caching tarball of preloaded images
	I0226 10:25:14.099379    2892 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 10:25:14.103155    2892 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0226 10:25:14.103155    2892 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:25:14.164549    2892 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 10:25:14.242714    2892 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 10:25:14.243245    2892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708008208-17936@sha256_4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar
	I0226 10:25:14.243563    2892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708008208-17936@sha256_4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar
	I0226 10:25:14.243563    2892 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 10:25:14.245491    2892 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-993300"

-- /stdout --
** stderr ** 
	W0226 10:25:19.615324   13952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.57s)

TestDownloadOnly/v1.16.0/DeleteAll (2.78s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.7782896s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (2.78s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.4s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-993300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-993300: (1.3991714s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.40s)

TestDownloadOnly/v1.28.4/json-events (7.67s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-825200 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-825200 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker: (7.6689008s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.67s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-825200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-825200: exit status 85 (275.9357ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-993300 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC |                     |
	|         | -p download-only-993300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC | 26 Feb 24 10:25 UTC |
	| delete  | -p download-only-993300        | download-only-993300 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC | 26 Feb 24 10:25 UTC |
	| start   | -o=json --download-only        | download-only-825200 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC |                     |
	|         | -p download-only-825200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 10:25:24
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 10:25:24.431386    3744 out.go:291] Setting OutFile to fd 692 ...
	I0226 10:25:24.432359    3744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:25:24.432359    3744 out.go:304] Setting ErrFile to fd 636...
	I0226 10:25:24.432359    3744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:25:24.456358    3744 out.go:298] Setting JSON to true
	I0226 10:25:24.459354    3744 start.go:129] hostinfo: {"hostname":"minikube7","uptime":401,"bootTime":1708942723,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:25:24.459354    3744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:25:24.464356    3744 out.go:97] [download-only-825200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:25:24.464356    3744 notify.go:220] Checking for updates...
	I0226 10:25:24.467371    3744 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:25:24.469358    3744 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:25:24.472346    3744 out.go:169] MINIKUBE_LOCATION=18222
	I0226 10:25:24.474348    3744 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0226 10:25:24.478350    3744 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 10:25:24.479374    3744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:25:24.752668    3744 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:25:24.762092    3744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:25:25.112974    3744 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:63 SystemTime:2024-02-26 10:25:25.074928415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 10:25:25.649747    3744 out.go:97] Using the docker driver based on user configuration
	I0226 10:25:25.649747    3744 start.go:299] selected driver: docker
	I0226 10:25:25.649747    3744 start.go:903] validating driver "docker" against <nil>
	I0226 10:25:25.668992    3744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:25:26.011464    3744 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:63 SystemTime:2024-02-26 10:25:25.974423399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0226 10:25:26.012047    3744 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 10:25:26.062486    3744 start_flags.go:394] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0226 10:25:26.063924    3744 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 10:25:26.200265    3744 out.go:169] Using Docker Desktop driver with root privileges
	I0226 10:25:26.203447    3744 cni.go:84] Creating CNI manager for ""
	I0226 10:25:26.203911    3744 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 10:25:26.203911    3744 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0226 10:25:26.204070    3744 start_flags.go:323] config:
	{Name:download-only-825200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-825200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:25:26.207223    3744 out.go:97] Starting control plane node download-only-825200 in cluster download-only-825200
	I0226 10:25:26.207316    3744 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 10:25:26.209596    3744 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 10:25:26.209596    3744 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 10:25:26.209596    3744 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 10:25:26.251094    3744 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 10:25:26.251094    3744 cache.go:56] Caching tarball of preloaded images
	I0226 10:25:26.251631    3744 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 10:25:26.254579    3744 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0226 10:25:26.254579    3744 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:25:26.313056    3744 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 10:25:26.391365    3744 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 10:25:26.391365    3744 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708008208-17936@sha256_4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar
	I0226 10:25:26.391365    3744 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708008208-17936@sha256_4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar
	I0226 10:25:26.391365    3744 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 10:25:26.391365    3744 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 10:25:26.391365    3744 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 10:25:26.391904    3744 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-825200"

-- /stdout --
** stderr ** 
	W0226 10:25:32.013441   13596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.28s)

TestDownloadOnly/v1.28.4/DeleteAll (2.36s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.3584015s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (2.36s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.27s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-825200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-825200: (1.2723609s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.27s)

TestDownloadOnly/v1.29.0-rc.2/json-events (7.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-543600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-543600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker: (7.080028s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (7.08s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-543600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-543600: exit status 85 (275.8388ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-993300 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC |                     |
	|         | -p download-only-993300           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=docker                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC | 26 Feb 24 10:25 UTC |
	| delete  | -p download-only-993300           | download-only-993300 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC | 26 Feb 24 10:25 UTC |
	| start   | -o=json --download-only           | download-only-825200 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC |                     |
	|         | -p download-only-825200           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=docker                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC | 26 Feb 24 10:25 UTC |
	| delete  | -p download-only-825200           | download-only-825200 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC | 26 Feb 24 10:25 UTC |
	| start   | -o=json --download-only           | download-only-543600 | minikube7\jenkins | v1.32.0 | 26 Feb 24 10:25 UTC |                     |
	|         | -p download-only-543600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=docker                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 10:25:35
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 10:25:35.990167    1208 out.go:291] Setting OutFile to fd 796 ...
	I0226 10:25:35.991346    1208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:25:35.991346    1208 out.go:304] Setting ErrFile to fd 800...
	I0226 10:25:35.991346    1208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:25:36.014752    1208 out.go:298] Setting JSON to true
	I0226 10:25:36.017723    1208 start.go:129] hostinfo: {"hostname":"minikube7","uptime":412,"bootTime":1708942723,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:25:36.017723    1208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:25:36.023187    1208 out.go:97] [download-only-543600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:25:36.023490    1208 notify.go:220] Checking for updates...
	I0226 10:25:36.025810    1208 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:25:36.036383    1208 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:25:36.039382    1208 out.go:169] MINIKUBE_LOCATION=18222
	I0226 10:25:36.042289    1208 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0226 10:25:36.047435    1208 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 10:25:36.047971    1208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:25:36.321161    1208 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:25:36.331063    1208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:25:36.679696    1208 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:63 SystemTime:2024-02-26 10:25:36.64255473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Index
ServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=
unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:25:36.855234    1208 out.go:97] Using the docker driver based on user configuration
	I0226 10:25:36.855581    1208 start.go:299] selected driver: docker
	I0226 10:25:36.855581    1208 start.go:903] validating driver "docker" against <nil>
	I0226 10:25:36.873129    1208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:25:37.214525    1208 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:63 SystemTime:2024-02-26 10:25:37.177177871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:25:37.215210    1208 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 10:25:37.260818    1208 start_flags.go:394] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0226 10:25:37.262139    1208 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 10:25:37.360925    1208 out.go:169] Using Docker Desktop driver with root privileges
	I0226 10:25:37.363735    1208 cni.go:84] Creating CNI manager for ""
	I0226 10:25:37.364491    1208 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 10:25:37.364491    1208 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0226 10:25:37.364491    1208 start_flags.go:323] config:
	{Name:download-only-543600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-543600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:25:37.367385    1208 out.go:97] Starting control plane node download-only-543600 in cluster download-only-543600
	I0226 10:25:37.367468    1208 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 10:25:37.369731    1208 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 10:25:37.369837    1208 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 10:25:37.369837    1208 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 10:25:37.410949    1208 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 10:25:37.411003    1208 cache.go:56] Caching tarball of preloaded images
	I0226 10:25:37.411434    1208 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 10:25:37.414033    1208 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0226 10:25:37.414033    1208 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:25:37.488208    1208 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 10:25:37.543175    1208 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 10:25:37.543227    1208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708008208-17936@sha256_4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar
	I0226 10:25:37.543227    1208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708008208-17936@sha256_4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf.tar
	I0226 10:25:37.543227    1208 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 10:25:37.543227    1208 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 10:25:37.543227    1208 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 10:25:37.543768    1208 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0226 10:25:40.438717    1208 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0226 10:25:40.439775    1208 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-543600"

-- /stdout --
** stderr ** 
	W0226 10:25:42.997632    6808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.28s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.83s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.83397s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.83s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.1s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-543600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-543600: (1.0971942s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.10s)

TestDownloadOnlyKic (3.85s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-776500 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-776500 --alsologtostderr --driver=docker: (1.5083597s)
helpers_test.go:175: Cleaning up "download-docker-776500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-776500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-776500: (1.3897703s)
--- PASS: TestDownloadOnlyKic (3.85s)

TestBinaryMirror (3.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-019000 --alsologtostderr --binary-mirror http://127.0.0.1:50690 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-019000 --alsologtostderr --binary-mirror http://127.0.0.1:50690 --driver=docker: (1.7671073s)
helpers_test.go:175: Cleaning up "binary-mirror-019000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-019000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-019000: (1.3858242s)
--- PASS: TestBinaryMirror (3.38s)

TestOffline (208.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-487700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-487700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m22.713462s)
helpers_test.go:175: Cleaning up "offline-docker-487700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-487700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-487700: (5.6781195s)
--- PASS: TestOffline (208.39s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-559100
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-559100: exit status 85 (306.4665ms)

-- stdout --
	* Profile "addons-559100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-559100"

-- /stdout --
** stderr ** 
	W0226 10:25:57.286440   10016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-559100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-559100: exit status 85 (307.512ms)

-- stdout --
	* Profile "addons-559100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-559100"

-- /stdout --
** stderr ** 
	W0226 10:25:57.279677   11060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

TestAddons/Setup (478.35s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-559100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-559100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m58.3448479s)
--- PASS: TestAddons/Setup (478.35s)

TestAddons/parallel/InspektorGadget (14.19s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qvz7p" [6e1efba5-d61a-47db-8395-c56093b6790a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0415552s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-559100
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-559100: (9.1425018s)
--- PASS: TestAddons/parallel/InspektorGadget (14.19s)

TestAddons/parallel/MetricsServer (7.4s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 22.8268ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-xpl96" [d77324d3-0e2f-4397-a3b6-306d2f6da8d7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0231583s
addons_test.go:415: (dbg) Run:  kubectl --context addons-559100 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 addons disable metrics-server --alsologtostderr -v=1: (2.1426077s)
--- PASS: TestAddons/parallel/MetricsServer (7.40s)

TestAddons/parallel/HelmTiller (16.76s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 22.8268ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-cx4pp" [60c8a096-c47a-4e4e-bb71-7185e2176e21] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0222392s
addons_test.go:473: (dbg) Run:  kubectl --context addons-559100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-559100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.584634s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 addons disable helm-tiller --alsologtostderr -v=1: (2.0889865s)
--- PASS: TestAddons/parallel/HelmTiller (16.76s)

TestAddons/parallel/CSI (93.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.7156ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-559100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-559100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [43864fe8-c5ef-4729-b12f-b4ab74eb7085] Pending
helpers_test.go:344: "task-pv-pod" [43864fe8-c5ef-4729-b12f-b4ab74eb7085] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [43864fe8-c5ef-4729-b12f-b4ab74eb7085] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 32.040006s
addons_test.go:584: (dbg) Run:  kubectl --context addons-559100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-559100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-559100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-559100 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-559100 delete pod task-pv-pod: (3.8630065s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-559100 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-559100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-559100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9390bf86-64d5-4372-9217-9b3ecfa746fd] Pending
helpers_test.go:344: "task-pv-pod-restore" [9390bf86-64d5-4372-9217-9b3ecfa746fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9390bf86-64d5-4372-9217-9b3ecfa746fd] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0225333s
addons_test.go:626: (dbg) Run:  kubectl --context addons-559100 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-559100 delete pod task-pv-pod-restore: (1.4124937s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-559100 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-559100 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.4748028s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 addons disable volumesnapshots --alsologtostderr -v=1: (2.4851772s)
--- PASS: TestAddons/parallel/CSI (93.95s)
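The repeated `helpers_test.go:394` lines above are the harness polling the restored PVC's `.status.phase` until it reaches the expected value. A minimal shell sketch of that kind of poll loop (the `wait_for_output` helper and the retry count are assumptions for illustration, not minikube code):

```shell
# Rerun a command until its stdout equals the wanted value or retries run out.
# `wait_for_output` is a hypothetical helper, not part of minikube's test suite.
wait_for_output() {
  want="$1"; shift
  tries="$1"; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    out="$("$@" 2>/dev/null)"
    if [ "$out" = "$want" ]; then
      echo "matched: $out"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for: $want" >&2
  return 1
}

# Against a live cluster this would look like (context name taken from this log):
# wait_for_output Bound 60 kubectl --context addons-559100 get pvc hpvc-restore \
#   -o 'jsonpath={.status.phase}' -n default
```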

                                                
                                    
TestAddons/parallel/Headlamp (35.13s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-559100 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-559100 --alsologtostderr -v=1: (4.1079332s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-zbgz9" [9c408cbc-ac67-43e2-a2e1-462cdab6c593] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-zbgz9" [9c408cbc-ac67-43e2-a2e1-462cdab6c593] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-zbgz9" [9c408cbc-ac67-43e2-a2e1-462cdab6c593] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 31.0189758s
--- PASS: TestAddons/parallel/Headlamp (35.13s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.64s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-cc9hk" [814faa1d-17d8-4f97-ae0b-4a895b9d4a81] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0201921s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-559100
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-559100: (2.5548407s)
--- PASS: TestAddons/parallel/CloudSpanner (7.64s)

                                                
                                    
TestAddons/parallel/LocalPath (86.77s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-559100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-559100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4e299ea1-3481-49e1-8507-2c66f8021b05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4e299ea1-3481-49e1-8507-2c66f8021b05] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4e299ea1-3481-49e1-8507-2c66f8021b05] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 28.0239562s
addons_test.go:891: (dbg) Run:  kubectl --context addons-559100 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 ssh "cat /opt/local-path-provisioner/pvc-3b141ccc-1995-4ca5-b680-3175d4dc9798_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 ssh "cat /opt/local-path-provisioner/pvc-3b141ccc-1995-4ca5-b680-3175d4dc9798_default_test-pvc/file1": (1.3046011s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-559100 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-559100 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (47.3437296s)
--- PASS: TestAddons/parallel/LocalPath (86.77s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.85s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-slfp7" [e2ae6f9c-5b29-4dd9-ba42-cdc2e0e8a8b5] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0232244s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-559100
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-559100: (1.8241902s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-wdb4s" [6e8d4040-0dbb-4251-bbf7-c022eee6806c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.015127s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.4s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-559100 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-559100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.40s)

                                                
                                    
TestAddons/StoppedEnableDisable (14.52s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-559100
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-559100: (12.8497143s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-559100
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-559100
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-559100
--- PASS: TestAddons/StoppedEnableDisable (14.52s)

                                                
                                    
TestCertOptions (80.08s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-380200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-380200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m10.9850202s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-380200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-380200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.2932405s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-380200 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-380200 -- "sudo cat /etc/kubernetes/admin.conf": (1.2317124s)
helpers_test.go:175: Cleaning up "cert-options-380200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-380200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-380200: (6.3548641s)
--- PASS: TestCertOptions (80.08s)
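TestCertOptions above dumps the apiserver certificate with `openssl x509 -text -noout` to confirm the extra `--apiserver-ips`/`--apiserver-names` landed in the cert's SANs. A hedged sketch of checking one SAN in that dump (`cert_has_san` is a hypothetical helper, not the test's actual code):

```shell
# Check that a fixed SAN string (e.g. "DNS:www.google.com" or
# "IP Address:192.168.15.15") appears in `openssl x509 -text -noout` output.
# `cert_has_san` is illustrative only; grep -F treats the SAN as a literal
# string so the dots in IP addresses are not regex wildcards.
cert_has_san() {
  text="$1"
  san="$2"
  printf '%s\n' "$text" | grep -qF "$san"
}

# With a live profile this would be fed from (command taken from this log):
# out/minikube-windows-amd64.exe -p cert-options-380200 ssh \
#   "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
```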

                                                
                                    
TestCertExpiration (303.93s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-720300 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-720300 --memory=2048 --cert-expiration=3m --driver=docker: (1m17.5873119s)
E0226 11:36:45.619202   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-720300 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-720300 --memory=2048 --cert-expiration=8760h --driver=docker: (39.6250481s)
helpers_test.go:175: Cleaning up "cert-expiration-720300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-720300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-720300: (6.7048578s)
--- PASS: TestCertExpiration (303.93s)

                                                
                                    
TestDockerFlags (80.98s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-256600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-256600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m12.4722272s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-256600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-256600 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.3549057s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-256600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-256600 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.3062112s)
helpers_test.go:175: Cleaning up "docker-flags-256600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-256600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-256600: (5.8456192s)
--- PASS: TestDockerFlags (80.98s)

                                                
                                    
TestForceSystemdFlag (90.29s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-445300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-445300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m22.9423353s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-445300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-445300 ssh "docker info --format {{.CgroupDriver}}": (1.3253026s)
helpers_test.go:175: Cleaning up "force-systemd-flag-445300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-445300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-445300: (6.0266848s)
--- PASS: TestForceSystemdFlag (90.29s)

                                                
                                    
TestForceSystemdEnv (82.47s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-784500 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-784500 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m14.9580351s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-784500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-784500 ssh "docker info --format {{.CgroupDriver}}": (1.3270109s)
helpers_test.go:175: Cleaning up "force-systemd-env-784500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-784500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-784500: (6.1858356s)
--- PASS: TestForceSystemdEnv (82.47s)

                                                
                                    
TestErrorSpam/start (3.91s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 start --dry-run: (1.3631022s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 start --dry-run: (1.2349313s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 start --dry-run: (1.3107323s)
--- PASS: TestErrorSpam/start (3.91s)

                                                
                                    
TestErrorSpam/status (3.91s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 status: (1.2801312s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 status: (1.2714209s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 status: (1.3569177s)
--- PASS: TestErrorSpam/status (3.91s)

                                                
                                    
TestErrorSpam/pause (3.9s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 pause: (1.5843801s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 pause: (1.1337054s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 pause: (1.1812423s)
--- PASS: TestErrorSpam/pause (3.90s)

                                                
                                    
TestErrorSpam/unpause (4.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 unpause: (1.3664341s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 unpause: (1.3574749s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 unpause: (1.8342146s)
--- PASS: TestErrorSpam/unpause (4.56s)

                                                
                                    
TestErrorSpam/stop (16.19s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 stop: (7.2212211s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 stop: (4.6178155s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-614200 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-614200 stop: (4.3430292s)
--- PASS: TestErrorSpam/stop (16.19s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\11868\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.16s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-366900 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0226 10:38:55.831206   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:55.846661   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:55.860925   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:55.892832   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:55.938194   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:56.033156   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:56.207848   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:56.538528   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:57.188719   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:38:58.481373   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:39:01.056307   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:39:06.182811   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:39:16.437600   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-366900 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m38.152926s)
--- PASS: TestFunctional/serial/StartWithProxy (98.16s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (45.32s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-366900 --alsologtostderr -v=8
E0226 10:39:36.926979   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 10:40:17.888878   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-366900 --alsologtostderr -v=8: (45.3164245s)
functional_test.go:659: soft start took 45.318669s for "functional-366900" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.32s)

TestFunctional/serial/KubeContext (0.12s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-366900 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cache add registry.k8s.io/pause:3.1: (2.3182186s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cache add registry.k8s.io/pause:3.3: (2.1845245s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cache add registry.k8s.io/pause:latest: (2.1971652s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.70s)

TestFunctional/serial/CacheCmd/cache/add_local (3.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-366900 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2734521906\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-366900 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2734521906\001: (1.7160111s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cache add minikube-local-cache-test:functional-366900
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cache add minikube-local-cache-test:functional-366900: (1.7326443s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cache delete minikube-local-cache-test:functional-366900
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-366900
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

TestFunctional/serial/CacheCmd/cache/list (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.25s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh sudo crictl images: (1.1746855s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (5.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.1533025s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.1788083s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0226 10:40:35.749077    8764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cache reload: (1.7388984s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.151299s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.22s)

TestFunctional/serial/CacheCmd/cache/delete (0.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.52s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 kubectl -- --context functional-366900 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

TestFunctional/serial/ExtraConfig (46.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-366900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-366900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.5483792s)
functional_test.go:757: restart took 46.5483792s for "functional-366900" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.55s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-366900 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (2.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 logs: (2.5882294s)
--- PASS: TestFunctional/serial/LogsCmd (2.59s)

TestFunctional/serial/LogsFileCmd (2.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3782536981\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3782536981\001\logs.txt: (2.7815227s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.79s)

TestFunctional/serial/InvalidService (5.84s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-366900 apply -f testdata\invalidsvc.yaml
E0226 10:41:39.822202   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-366900
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-366900: exit status 115 (1.5644936s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32546 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	W0226 10:41:42.836635   13956 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-366900 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.84s)

TestFunctional/parallel/DryRun (2.88s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-366900 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-366900 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.0699369s)

-- stdout --
	* [functional-366900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	W0226 10:41:52.815327   10696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 10:41:52.900323   10696 out.go:291] Setting OutFile to fd 1016 ...
	I0226 10:41:52.901339   10696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:41:52.901339   10696 out.go:304] Setting ErrFile to fd 872...
	I0226 10:41:52.901339   10696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:41:52.928317   10696 out.go:298] Setting JSON to false
	I0226 10:41:52.932333   10696 start.go:129] hostinfo: {"hostname":"minikube7","uptime":1389,"bootTime":1708942723,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:41:52.932333   10696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:41:52.937350   10696 out.go:177] * [functional-366900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:41:52.939324   10696 notify.go:220] Checking for updates...
	I0226 10:41:52.941342   10696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:41:52.944335   10696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 10:41:52.946330   10696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:41:52.949320   10696 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 10:41:52.951315   10696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 10:41:52.954318   10696 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 10:41:52.955333   10696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:41:53.274302   10696 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:41:53.286039   10696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:41:53.661671   10696 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:78 SystemTime:2024-02-26 10:41:53.618061709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:41:53.665668   10696 out.go:177] * Using the docker driver based on existing profile
	I0226 10:41:53.668669   10696 start.go:299] selected driver: docker
	I0226 10:41:53.668669   10696 start.go:903] validating driver "docker" against &{Name:functional-366900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-366900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:41:53.668669   10696 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 10:41:53.717684   10696 out.go:177] 
	W0226 10:41:53.720670   10696 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0226 10:41:53.722692   10696 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-366900 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-366900 --dry-run --alsologtostderr -v=1 --driver=docker: (1.8116724s)
--- PASS: TestFunctional/parallel/DryRun (2.88s)

TestFunctional/parallel/InternationalLanguage (1.07s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-366900 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-366900 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.0656506s)

-- stdout --
	* [functional-366900] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	W0226 10:41:50.996124    1640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 10:41:51.073360    1640 out.go:291] Setting OutFile to fd 764 ...
	I0226 10:41:51.074361    1640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:41:51.074361    1640 out.go:304] Setting ErrFile to fd 780...
	I0226 10:41:51.074361    1640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 10:41:51.094367    1640 out.go:298] Setting JSON to false
	I0226 10:41:51.097356    1640 start.go:129] hostinfo: {"hostname":"minikube7","uptime":1387,"bootTime":1708942723,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0226 10:41:51.097356    1640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 10:41:51.100363    1640 out.go:177] * [functional-366900] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0226 10:41:51.104364    1640 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0226 10:41:51.104364    1640 notify.go:220] Checking for updates...
	I0226 10:41:51.107363    1640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 10:41:51.109355    1640 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0226 10:41:51.112359    1640 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 10:41:51.114364    1640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 10:41:51.117365    1640 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 10:41:51.119365    1640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 10:41:51.434579    1640 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 10:41:51.443582    1640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 10:41:51.808902    1640 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:78 SystemTime:2024-02-26 10:41:51.770327825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657511936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 10:41:51.814900    1640 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0226 10:41:51.816900    1640 start.go:299] selected driver: docker
	I0226 10:41:51.816900    1640 start.go:903] validating driver "docker" against &{Name:functional-366900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-366900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 10:41:51.816900    1640 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 10:41:51.881930    1640 out.go:177] 
	W0226 10:41:51.884897    1640 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0226 10:41:51.887896    1640 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.07s)

TestFunctional/parallel/StatusCmd (4.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 status: (1.4463694s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.6923668s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 status -o json: (1.8061817s)
--- PASS: TestFunctional/parallel/StatusCmd (4.95s)

TestFunctional/parallel/AddonsCmd (0.7s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.70s)

TestFunctional/parallel/PersistentVolumeClaim (45.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8af976fb-796a-4d3b-a3db-c54011d75859] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0159342s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-366900 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-366900 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-366900 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-366900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7640833e-82ba-4cf7-b7be-9a4ff549fe55] Pending
helpers_test.go:344: "sp-pod" [7640833e-82ba-4cf7-b7be-9a4ff549fe55] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7640833e-82ba-4cf7-b7be-9a4ff549fe55] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.0115716s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-366900 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-366900 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-366900 delete -f testdata/storage-provisioner/pod.yaml: (1.2781623s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-366900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1bd6d2b3-b8a8-4246-a653-faae3cf7d52b] Pending
helpers_test.go:344: "sp-pod" [1bd6d2b3-b8a8-4246-a653-faae3cf7d52b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1bd6d2b3-b8a8-4246-a653-faae3cf7d52b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0202304s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-366900 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.57s)

TestFunctional/parallel/SSHCmd (2.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "echo hello": (1.2919198s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "cat /etc/hostname": (1.3217486s)
--- PASS: TestFunctional/parallel/SSHCmd (2.61s)

TestFunctional/parallel/CpCmd (8.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.0569716s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh -n functional-366900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh -n functional-366900 "sudo cat /home/docker/cp-test.txt": (1.4886426s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cp functional-366900:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd514817653\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cp functional-366900:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd514817653\001\cp-test.txt: (1.6132797s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh -n functional-366900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh -n functional-366900 "sudo cat /home/docker/cp-test.txt": (1.5838087s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.3948573s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh -n functional-366900 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh -n functional-366900 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.463164s)
--- PASS: TestFunctional/parallel/CpCmd (8.60s)

TestFunctional/parallel/MySQL (78.07s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-366900 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-fwtmc" [19064fb5-49fd-4400-8812-ca63116c67ac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-fwtmc" [19064fb5-49fd-4400-8812-ca63116c67ac] Running
E0226 10:43:55.833901   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m2.0158958s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;": exit status 1 (283.8031ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;": exit status 1 (307.0217ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;": exit status 1 (312.6674ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;": exit status 1 (327.3534ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;": exit status 1 (322.8283ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366900 exec mysql-859648c796-fwtmc -- mysql -ppassword -e "show databases;"
E0226 10:44:23.672998   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (78.07s)

TestFunctional/parallel/FileSync (1.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11868/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/test/nested/copy/11868/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/test/nested/copy/11868/hosts": (1.2350901s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.24s)

TestFunctional/parallel/CertSync (7.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11868.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/11868.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/11868.pem": (1.1463907s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11868.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /usr/share/ca-certificates/11868.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /usr/share/ca-certificates/11868.pem": (1.2571503s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.2310107s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/118682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/118682.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/118682.pem": (1.4131519s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/118682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /usr/share/ca-certificates/118682.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /usr/share/ca-certificates/118682.pem": (1.1810939s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.1418224s)
--- PASS: TestFunctional/parallel/CertSync (7.37s)

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-366900 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.11s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 ssh "sudo systemctl is-active crio": exit status 1 (1.1102946s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0226 10:42:35.410328   13552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.11s)

TestFunctional/parallel/License (2.87s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.8637054s)
--- PASS: TestFunctional/parallel/License (2.87s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-366900 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-366900 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-7x69n" [5709b5c2-3c08-4691-a5ca-0bc17f1510a9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-7x69n" [5709b5c2-3c08-4691-a5ca-0bc17f1510a9] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.009711s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.92s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.3839558s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.92s)

TestFunctional/parallel/ProfileCmd/profile_list (1.9s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.5743343s)
functional_test.go:1311: Took "1.5763025s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "323.956ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.90s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.92s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.6476175s)
functional_test.go:1362: Took "1.6476175s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "270.1345ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.92s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-366900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-366900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-366900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1620: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4948: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-366900 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-366900 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (36.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-366900 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-366900 apply -f testdata\testsvc.yaml: (1.0083139s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [26958bbc-92c7-4edc-b920-086da5c6d1a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [26958bbc-92c7-4edc-b920-086da5c6d1a4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 35.0164204s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (36.05s)

TestFunctional/parallel/ServiceCmd/List (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 service list: (1.2969141s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 service list -o json: (1.2761346s)
functional_test.go:1490: Took "1.2761346s" to run "out/minikube-windows-amd64.exe -p functional-366900 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 service --namespace=default --https --url hello-node: exit status 1 (15.0157793s)

-- stdout --
	https://127.0.0.1:51727

-- /stdout --
** stderr ** 
	W0226 10:42:09.281243    5548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:51727
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 service hello-node --url --format={{.IP}}: exit status 1 (15.0224986s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	W0226 10:42:24.311115    6356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-366900 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-366900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3800: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 14176: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/Version/short (0.35s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 version --short
--- PASS: TestFunctional/parallel/Version/short (0.35s)

TestFunctional/parallel/Version/components (2.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 version -o=json --components: (2.622684s)
--- PASS: TestFunctional/parallel/Version/components (2.62s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image ls --format short --alsologtostderr: (1.0208584s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-366900 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-366900
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-366900
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-366900 image ls --format short --alsologtostderr:
W0226 10:43:34.143378    7840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0226 10:43:34.239000    7840 out.go:291] Setting OutFile to fd 816 ...
I0226 10:43:34.250003    7840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:34.250003    7840 out.go:304] Setting ErrFile to fd 964...
I0226 10:43:34.250003    7840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:34.262997    7840 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:34.262997    7840 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:34.280996    7840 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
I0226 10:43:34.454943    7840 ssh_runner.go:195] Run: systemctl --version
I0226 10:43:34.463956    7840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
I0226 10:43:34.646420    7840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
I0226 10:43:34.927086    7840 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.02s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-366900 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/google-containers/addon-resizer      | functional-366900 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-366900 | ae881c0931128 | 30B    |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-366900 image ls --format table --alsologtostderr:
W0226 10:43:36.066604    8496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0226 10:43:36.155036    8496 out.go:291] Setting OutFile to fd 688 ...
I0226 10:43:36.155036    8496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:36.155036    8496 out.go:304] Setting ErrFile to fd 872...
I0226 10:43:36.155036    8496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:36.170172    8496 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:36.170795    8496 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:36.190008    8496 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
I0226 10:43:36.385187    8496 ssh_runner.go:195] Run: systemctl --version
I0226 10:43:36.394381    8496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
I0226 10:43:36.584214    8496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
I0226 10:43:36.723711    8496 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.88s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-366900 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-366900"],"size":"32900000"},{"id":"ae881c09311288416e951f5a7f81f5217955cdba71986784f7ee6825f7aeab4e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-366900"],"size":"30"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c
8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTag
s":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-366900 image ls --format json --alsologtostderr:
W0226 10:43:35.157209   13608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0226 10:43:35.265470   13608 out.go:291] Setting OutFile to fd 720 ...
I0226 10:43:35.266056   13608 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:35.266056   13608 out.go:304] Setting ErrFile to fd 904...
I0226 10:43:35.266056   13608 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:35.283824   13608 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:35.283824   13608 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:35.303171   13608 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
I0226 10:43:35.492871   13608 ssh_runner.go:195] Run: systemctl --version
I0226 10:43:35.500863   13608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
I0226 10:43:35.698191   13608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
I0226 10:43:35.859992   13608 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.90s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-366900 image ls --format yaml --alsologtostderr:
- id: ae881c09311288416e951f5a7f81f5217955cdba71986784f7ee6825f7aeab4e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-366900
size: "30"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-366900
size: "32900000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-366900 image ls --format yaml --alsologtostderr:
W0226 10:43:34.137732    4688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0226 10:43:34.234991    4688 out.go:291] Setting OutFile to fd 896 ...
I0226 10:43:34.236006    4688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:34.236006    4688 out.go:304] Setting ErrFile to fd 800...
I0226 10:43:34.236006    4688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:34.250003    4688 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:34.251022    4688 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:34.268999    4688 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
I0226 10:43:34.467941    4688 ssh_runner.go:195] Run: systemctl --version
I0226 10:43:34.480648    4688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
I0226 10:43:34.659000    4688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
I0226 10:43:34.926115    4688 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.98s)

TestFunctional/parallel/ImageCommands/ImageBuild (7.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 ssh pgrep buildkitd: exit status 1 (1.2415874s)

** stderr ** 
	W0226 10:43:35.128768   10848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image build -t localhost/my-image:functional-366900 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image build -t localhost/my-image:functional-366900 testdata\build --alsologtostderr: (5.8314273s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-366900 image build -t localhost/my-image:functional-366900 testdata\build --alsologtostderr:
W0226 10:43:36.358760    1064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0226 10:43:36.458770    1064 out.go:291] Setting OutFile to fd 476 ...
I0226 10:43:36.471767    1064 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:36.471767    1064 out.go:304] Setting ErrFile to fd 896...
I0226 10:43:36.471767    1064 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 10:43:36.486766    1064 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:36.504852    1064 config.go:182] Loaded profile config "functional-366900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 10:43:36.523313    1064 cli_runner.go:164] Run: docker container inspect functional-366900 --format={{.State.Status}}
I0226 10:43:36.724712    1064 ssh_runner.go:195] Run: systemctl --version
I0226 10:43:36.737005    1064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-366900
I0226 10:43:36.913334    1064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51485 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-366900\id_rsa Username:docker}
I0226 10:43:37.041461    1064 build_images.go:151] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.1370258154.tar
I0226 10:43:37.056890    1064 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0226 10:43:37.097954    1064 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1370258154.tar
I0226 10:43:37.108831    1064 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1370258154.tar: stat -c "%s %y" /var/lib/minikube/build/build.1370258154.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1370258154.tar': No such file or directory
I0226 10:43:37.108831    1064 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.1370258154.tar --> /var/lib/minikube/build/build.1370258154.tar (3072 bytes)
I0226 10:43:37.171044    1064 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1370258154
I0226 10:43:37.205029    1064 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1370258154 -xf /var/lib/minikube/build/build.1370258154.tar
I0226 10:43:37.236084    1064 docker.go:360] Building image: /var/lib/minikube/build/build.1370258154
I0226 10:43:37.246297    1064 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-366900 /var/lib/minikube/build/build.1370258154
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.2s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s
#6 [2/3] RUN true
#6 DONE 1.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.2s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:2c672ef1bc28d18a318021988a770628ffec82085d702195dba72fc038a8b6a3 done
#8 naming to localhost/my-image:functional-366900 0.0s done
#8 DONE 0.2s
I0226 10:43:41.938723    1064 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-366900 /var/lib/minikube/build/build.1370258154: (4.6924124s)
I0226 10:43:41.958415    1064 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1370258154
I0226 10:43:42.002391    1064 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1370258154.tar
I0226 10:43:42.043402    1064 build_images.go:207] Built localhost/my-image:functional-366900 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.1370258154.tar
I0226 10:43:42.043531    1064 build_images.go:123] succeeded building to: functional-366900
I0226 10:43:42.043658    1064 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.98s)

TestFunctional/parallel/ImageCommands/Setup (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.536577s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-366900
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.76s)

TestFunctional/parallel/ServiceCmd/URL (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-366900 service hello-node --url: exit status 1 (15.0385114s)

                                                
                                                
-- stdout --
	http://127.0.0.1:51788
-- /stdout --
** stderr ** 
	W0226 10:42:39.326443    5748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:51788
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.04s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image load --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image load --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr: (9.1665769s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.04s)

TestFunctional/parallel/DockerEnv/powershell (8.77s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-366900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-366900"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-366900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-366900": (5.105914s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-366900 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-366900 docker-env | Invoke-Expression ; docker images": (3.6550063s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (8.77s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image load --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image load --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr: (5.5311915s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.68s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.71s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.71s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.7s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.70s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.224748s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-366900
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image load --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image load --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr: (11.0933821s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image save gcr.io/google-containers/addon-resizer:functional-366900 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image save gcr.io/google-containers/addon-resizer:functional-366900 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.5018463s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image rm gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.75s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (6.1090516s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.01s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-366900
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-366900 image save --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-366900 image save --daemon gcr.io/google-containers/addon-resizer:functional-366900 --alsologtostderr: (6.3457869s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-366900
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.72s)

TestFunctional/delete_addon-resizer_images (0.48s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-366900
--- PASS: TestFunctional/delete_addon-resizer_images (0.48s)

TestFunctional/delete_my-image_image (0.18s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-366900
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.17s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-366900
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

TestImageBuild/serial/Setup (66.59s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-522500 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-522500 --driver=docker: (1m6.5867241s)
--- PASS: TestImageBuild/serial/Setup (66.59s)

TestImageBuild/serial/NormalBuild (3.8s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-522500
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-522500: (3.8042518s)
--- PASS: TestImageBuild/serial/NormalBuild (3.80s)

TestImageBuild/serial/BuildWithBuildArg (2.5s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-522500
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-522500: (2.4989682s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.50s)

TestImageBuild/serial/BuildWithDockerIgnore (2.03s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-522500
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-522500: (2.0256214s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (2.03s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-522500
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-522500: (2.7544732s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.75s)

TestJSONOutput/start/Command (82.05s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-671900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0226 10:58:55.849585   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-671900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m22.0543932s)
--- PASS: TestJSONOutput/start/Command (82.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-671900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-671900 --output=json --user=testUser: (1.7837757s)
--- PASS: TestJSONOutput/pause/Command (1.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-671900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-671900 --output=json --user=testUser: (1.5244385s)
--- PASS: TestJSONOutput/unpause/Command (1.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-671900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-671900 --output=json --user=testUser: (7.7040435s)
--- PASS: TestJSONOutput/stop/Command (7.70s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.35s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-015200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-015200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (253.9337ms)
-- stdout --
	{"specversion":"1.0","id":"4143ef66-e832-4386-a5e4-3d362d5d8007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-015200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0a2da04-4315-4f2c-92c3-82749967ed95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6a6f286e-c857-4f47-9e88-0564b017ad4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9e2c7618-bb5d-4b86-9e40-5c8caed45e17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"fd315675-ebdc-4969-a16a-f69bf03d24f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}
	{"specversion":"1.0","id":"a3eead19-a68c-4ceb-9ac4-0565f5f1afc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cb64db3b-6cae-4314-88ad-2dd8bed5568e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0226 11:00:08.890299    1684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-015200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-015200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-015200: (1.0935311s)
--- PASS: TestErrorJSONOutput (1.35s)
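Each `-- stdout --` line above is a self-contained CloudEvents envelope, which is what makes `--output=json` machine-parseable. A minimal sketch of consuming those lines (two envelopes copied from the log above; field names taken verbatim from it):

```python
import json

# One CloudEvents envelope per stdout line, as emitted by
# `minikube start --output=json` (two lines taken from the log above).
log_lines = [
    '{"specversion":"1.0","id":"fd315675-ebdc-4969-a16a-f69bf03d24f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}',
    '{"specversion":"1.0","id":"cb64db3b-6cae-4314-88ad-2dd8bed5568e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

events = [json.loads(line) for line in log_lines]

# Pull out any error events and report their name and exit code.
errors = [e["data"] for e in events if e["type"] == "io.k8s.sigs.minikube.error"]
for err in errors:
    print(err["name"], err["exitcode"])  # → DRV_UNSUPPORTED_OS 56
```

The `exitcode` field in the error event matches the process exit status 56 recorded by the test.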

TestKicCustomNetwork/create_custom_network (76.59s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-451600 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-451600 --network=: (1m11.4746527s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-451600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-451600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-451600: (4.9292233s)
--- PASS: TestKicCustomNetwork/create_custom_network (76.59s)

TestKicCustomNetwork/use_default_bridge_network (75.93s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-032800 --network=bridge
E0226 11:01:45.610826   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-032800 --network=bridge: (1m11.3075362s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-032800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-032800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-032800: (4.4350555s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (75.93s)

TestKicExistingNetwork (77.19s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-371700 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-371700 --network=existing-network: (1m11.4586105s)
helpers_test.go:175: Cleaning up "existing-network-371700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-371700
E0226 11:03:55.847711   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-371700: (4.5930239s)
--- PASS: TestKicExistingNetwork (77.19s)

TestKicCustomSubnet (74.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-220100 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-220100 --subnet=192.168.60.0/24: (1m9.3777257s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-220100 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-220100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-220100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-220100: (5.2482067s)
--- PASS: TestKicCustomSubnet (74.81s)
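The test above reads the network's IPAM subnet back with `docker network inspect custom-subnet-220100 --format "{{(index .IPAM.Config 0).Subnet}}"` and compares it to the requested `--subnet=192.168.60.0/24`. A sketch of the same comparison using Python's `ipaddress` module (the `reported` value and node address are assumed for illustration; the log does not print them):

```python
import ipaddress

requested = "192.168.60.0/24"
# Value the test reads back via `docker network inspect ... --format
# "{{(index .IPAM.Config 0).Subnet}}"` — assumed here, not shown in the log.
reported = "192.168.60.0/24"

# Compare as networks rather than strings, so "192.168.060.0/24"-style
# formatting differences cannot cause a false mismatch.
assert ipaddress.ip_network(reported) == ipaddress.ip_network(requested)

# Any node IP allocated from that network should fall inside the subnet.
node_ip = ipaddress.ip_address("192.168.60.2")  # hypothetical example address
print(node_ip in ipaddress.ip_network(requested))  # → True
```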

TestKicStaticIP (76.29s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-808800 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-808800 --static-ip=192.168.200.200: (1m10.4368909s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-808800 ip
helpers_test.go:175: Cleaning up "static-ip-808800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-808800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-808800: (5.2519489s)
--- PASS: TestKicStaticIP (76.29s)

TestMainNoArgs (0.23s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

TestMinikubeProfile (144.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-836400 --driver=docker
E0226 11:06:45.604429   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-836400 --driver=docker: (1m8.5219171s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-836400 --driver=docker
E0226 11:08:08.770867   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-836400 --driver=docker: (1m0.9396315s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-836400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.9677209s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-836400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.9730677s)
helpers_test.go:175: Cleaning up "second-836400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-836400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-836400: (5.4673281s)
helpers_test.go:175: Cleaning up "first-836400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-836400
E0226 11:08:55.842543   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-836400: (5.199644s)
--- PASS: TestMinikubeProfile (144.91s)

TestMountStart/serial/StartWithMountFirst (20.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-570700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-570700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (19.1276011s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.14s)

TestMountStart/serial/VerifyMountFirst (1.13s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-570700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-570700 ssh -- ls /minikube-host: (1.1288682s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.13s)

TestMountStart/serial/StartWithMountSecond (18.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-570700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-570700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (17.6394265s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.64s)

TestMountStart/serial/VerifyMountSecond (1.04s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-570700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-570700 ssh -- ls /minikube-host: (1.04031s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.04s)

TestMountStart/serial/DeleteFirst (3.95s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-570700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-570700 --alsologtostderr -v=5: (3.9447749s)
--- PASS: TestMountStart/serial/DeleteFirst (3.95s)

TestMountStart/serial/VerifyMountPostDelete (1.10s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-570700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-570700 ssh -- ls /minikube-host: (1.0991262s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.10s)

TestMountStart/serial/Stop (2.49s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-570700
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-570700: (2.4861987s)
--- PASS: TestMountStart/serial/Stop (2.49s)

TestMountStart/serial/RestartStopped (13.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-570700
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-570700: (12.1562552s)
--- PASS: TestMountStart/serial/RestartStopped (13.17s)

TestMountStart/serial/VerifyMountPostStop (1.05s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-570700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-570700 ssh -- ls /minikube-host: (1.0487911s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.05s)

TestMultiNode/serial/FreshStart2Nodes (158.71s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-718300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0226 11:11:45.607697   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 11:11:59.064839   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-718300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m36.6193175s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr: (2.0869213s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (158.71s)

TestMultiNode/serial/DeployApp2Nodes (23.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- rollout status deployment/busybox: (16.4093799s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-tbg24 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-tbg24 -- nslookup kubernetes.io: (1.8745086s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-v5z25 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-v5z25 -- nslookup kubernetes.io: (1.5509117s)
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-tbg24 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-v5z25 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-tbg24 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-v5z25 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (23.48s)

TestMultiNode/serial/PingHostFrom2Pods (2.52s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-tbg24 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-tbg24 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-v5z25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-718300 -- exec busybox-5b5d89c9d6-v5z25 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.52s)
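The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the third space-delimited field of the fifth line of busybox's `nslookup` output, which is the host IP the pod then pings. A Python re-creation of that extraction, applied to a sample of busybox nslookup output (the sample text is assumed; the log does not include the command's stdout):

```python
# Re-creates `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`.
# Assumed busybox-style nslookup output; server/name lines are illustrative.
sample = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.254 host.minikube.internal"""

line5 = sample.splitlines()[4]   # awk 'NR==5' — fifth line, 1-indexed
host_ip = line5.split(" ")[2]    # cut -d' ' -f3 — third single-space field
print(host_ip)                   # → 192.168.65.254
```

The extracted address matches the `ping -c 1 192.168.65.254` target the test runs next.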

TestMultiNode/serial/AddNode (54.2s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-718300 -v 3 --alsologtostderr
E0226 11:13:55.840353   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-718300 -v 3 --alsologtostderr: (51.2152565s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr: (2.9830878s)
--- PASS: TestMultiNode/serial/AddNode (54.20s)

TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-718300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)
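The jsonpath template above, `[{range .items[*]}{.metadata.labels},{end}]`, iterates every node in the NodeList and prints its labels map followed by a comma, wrapped in brackets. A sketch of the same traversal over a pared-down NodeList (node objects and labels abridged for illustration; real nodes carry many more labels):

```python
# Approximates kubectl's jsonpath [{range .items[*]}{.metadata.labels},{end}]
# over a minimal NodeList. Only the hostname label is kept for brevity.
node_list = {
    "items": [
        {"metadata": {"name": "multinode-718300",
                      "labels": {"kubernetes.io/hostname": "multinode-718300"}}},
        {"metadata": {"name": "multinode-718300-m02",
                      "labels": {"kubernetes.io/hostname": "multinode-718300-m02"}}},
    ]
}

# {range .items[*]} ... {end}: emit each node's labels map, comma-separated,
# inside literal brackets (dict repr here stands in for kubectl's JSON form).
rendered = "[" + "".join(f'{item["metadata"]["labels"]},'
                         for item in node_list["items"]) + "]"
print(rendered)
```

The test then checks that every node's label map in the rendered output carries the expected minikube-assigned labels.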

TestMultiNode/serial/ProfileList (1.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.268156s)
--- PASS: TestMultiNode/serial/ProfileList (1.27s)

TestMultiNode/serial/CopyFile (39.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 status --output json --alsologtostderr: (2.7865063s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp testdata\cp-test.txt multinode-718300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp testdata\cp-test.txt multinode-718300:/home/docker/cp-test.txt: (1.1233381s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt": (1.133959s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4060475780\001\cp-test_multinode-718300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4060475780\001\cp-test_multinode-718300.txt: (1.1078074s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt": (1.1552101s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300:/home/docker/cp-test.txt multinode-718300-m02:/home/docker/cp-test_multinode-718300_multinode-718300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300:/home/docker/cp-test.txt multinode-718300-m02:/home/docker/cp-test_multinode-718300_multinode-718300-m02.txt: (1.6064632s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt": (1.0992899s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test_multinode-718300_multinode-718300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test_multinode-718300_multinode-718300-m02.txt": (1.1157884s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300:/home/docker/cp-test.txt multinode-718300-m03:/home/docker/cp-test_multinode-718300_multinode-718300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300:/home/docker/cp-test.txt multinode-718300-m03:/home/docker/cp-test_multinode-718300_multinode-718300-m03.txt: (1.6852951s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test.txt": (1.1461946s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test_multinode-718300_multinode-718300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test_multinode-718300_multinode-718300-m03.txt": (1.1422339s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp testdata\cp-test.txt multinode-718300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp testdata\cp-test.txt multinode-718300-m02:/home/docker/cp-test.txt: (1.0988409s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt": (1.1403651s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4060475780\001\cp-test_multinode-718300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4060475780\001\cp-test_multinode-718300-m02.txt: (1.1256432s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt": (1.1218644s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m02:/home/docker/cp-test.txt multinode-718300:/home/docker/cp-test_multinode-718300-m02_multinode-718300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m02:/home/docker/cp-test.txt multinode-718300:/home/docker/cp-test_multinode-718300-m02_multinode-718300.txt: (1.6301638s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt": (1.1500681s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test_multinode-718300-m02_multinode-718300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test_multinode-718300-m02_multinode-718300.txt": (1.1295192s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m02:/home/docker/cp-test.txt multinode-718300-m03:/home/docker/cp-test_multinode-718300-m02_multinode-718300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m02:/home/docker/cp-test.txt multinode-718300-m03:/home/docker/cp-test_multinode-718300-m02_multinode-718300-m03.txt: (1.6825692s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test.txt": (1.127106s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test_multinode-718300-m02_multinode-718300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test_multinode-718300-m02_multinode-718300-m03.txt": (1.1151999s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp testdata\cp-test.txt multinode-718300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp testdata\cp-test.txt multinode-718300-m03:/home/docker/cp-test.txt: (1.1330551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt": (1.0904734s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4060475780\001\cp-test_multinode-718300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4060475780\001\cp-test_multinode-718300-m03.txt: (1.1039906s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt": (1.0811217s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m03:/home/docker/cp-test.txt multinode-718300:/home/docker/cp-test_multinode-718300-m03_multinode-718300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m03:/home/docker/cp-test.txt multinode-718300:/home/docker/cp-test_multinode-718300-m03_multinode-718300.txt: (1.6966764s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt": (1.1055929s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test_multinode-718300-m03_multinode-718300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300 "sudo cat /home/docker/cp-test_multinode-718300-m03_multinode-718300.txt": (1.1224532s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m03:/home/docker/cp-test.txt multinode-718300-m02:/home/docker/cp-test_multinode-718300-m03_multinode-718300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 cp multinode-718300-m03:/home/docker/cp-test.txt multinode-718300-m02:/home/docker/cp-test_multinode-718300-m03_multinode-718300-m02.txt: (1.6287417s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m03 "sudo cat /home/docker/cp-test.txt": (1.1258829s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test_multinode-718300-m03_multinode-718300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 ssh -n multinode-718300-m02 "sudo cat /home/docker/cp-test_multinode-718300-m03_multinode-718300-m02.txt": (1.1108599s)
--- PASS: TestMultiNode/serial/CopyFile (39.64s)

                                                
                                    
TestMultiNode/serial/StopNode (6.53s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 node stop m03: (2.1144285s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-718300 status: exit status 7 (2.2297099s)

                                                
                                                
-- stdout --
	multinode-718300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-718300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-718300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:14:46.006988    8628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr: exit status 7 (2.1823414s)

                                                
                                                
-- stdout --
	multinode-718300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-718300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-718300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:14:48.225236    7616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 11:14:48.305771    7616 out.go:291] Setting OutFile to fd 812 ...
	I0226 11:14:48.306764    7616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:14:48.306764    7616 out.go:304] Setting ErrFile to fd 780...
	I0226 11:14:48.306764    7616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:14:48.319563    7616 out.go:298] Setting JSON to false
	I0226 11:14:48.319563    7616 mustload.go:65] Loading cluster: multinode-718300
	I0226 11:14:48.319563    7616 notify.go:220] Checking for updates...
	I0226 11:14:48.320570    7616 config.go:182] Loaded profile config "multinode-718300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:14:48.320570    7616 status.go:255] checking status of multinode-718300 ...
	I0226 11:14:48.341087    7616 cli_runner.go:164] Run: docker container inspect multinode-718300 --format={{.State.Status}}
	I0226 11:14:48.521640    7616 status.go:330] multinode-718300 host status = "Running" (err=<nil>)
	I0226 11:14:48.521640    7616 host.go:66] Checking if "multinode-718300" exists ...
	I0226 11:14:48.531707    7616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-718300
	I0226 11:14:48.692553    7616 host.go:66] Checking if "multinode-718300" exists ...
	I0226 11:14:48.707348    7616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:14:48.716107    7616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-718300
	I0226 11:14:48.879532    7616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52394 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-718300\id_rsa Username:docker}
	I0226 11:14:49.021356    7616 ssh_runner.go:195] Run: systemctl --version
	I0226 11:14:49.043693    7616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:14:49.075571    7616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-718300
	I0226 11:14:49.253013    7616 kubeconfig.go:92] found "multinode-718300" server: "https://127.0.0.1:52398"
	I0226 11:14:49.253127    7616 api_server.go:166] Checking apiserver status ...
	I0226 11:14:49.267054    7616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:14:49.305580    7616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2444/cgroup
	I0226 11:14:49.329023    7616 api_server.go:182] apiserver freezer: "21:freezer:/docker/3a91e69913fdb34e60403682cc215e5cfe72bec2ee8d667c5d98e466a3aad054/kubepods/burstable/pod885dc812751052b9ee9554a1dfa00ee7/6baa5e36865204c0ce516b1ae5bd1230c280b5088ad377a63f60ec14c5ced5bc"
	I0226 11:14:49.343080    7616 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3a91e69913fdb34e60403682cc215e5cfe72bec2ee8d667c5d98e466a3aad054/kubepods/burstable/pod885dc812751052b9ee9554a1dfa00ee7/6baa5e36865204c0ce516b1ae5bd1230c280b5088ad377a63f60ec14c5ced5bc/freezer.state
	I0226 11:14:49.365734    7616 api_server.go:204] freezer state: "THAWED"
	I0226 11:14:49.365765    7616 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52398/healthz ...
	I0226 11:14:49.376682    7616 api_server.go:279] https://127.0.0.1:52398/healthz returned 200:
	ok
	I0226 11:14:49.377455    7616 status.go:421] multinode-718300 apiserver status = Running (err=<nil>)
	I0226 11:14:49.377505    7616 status.go:257] multinode-718300 status: &{Name:multinode-718300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0226 11:14:49.377557    7616 status.go:255] checking status of multinode-718300-m02 ...
	I0226 11:14:49.393827    7616 cli_runner.go:164] Run: docker container inspect multinode-718300-m02 --format={{.State.Status}}
	I0226 11:14:49.547050    7616 status.go:330] multinode-718300-m02 host status = "Running" (err=<nil>)
	I0226 11:14:49.547103    7616 host.go:66] Checking if "multinode-718300-m02" exists ...
	I0226 11:14:49.557230    7616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-718300-m02
	I0226 11:14:49.734758    7616 host.go:66] Checking if "multinode-718300-m02" exists ...
	I0226 11:14:49.746756    7616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:14:49.755754    7616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-718300-m02
	I0226 11:14:49.923293    7616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52444 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-718300-m02\id_rsa Username:docker}
	I0226 11:14:50.068661    7616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:14:50.091452    7616 status.go:257] multinode-718300-m02 status: &{Name:multinode-718300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0226 11:14:50.091452    7616 status.go:255] checking status of multinode-718300-m03 ...
	I0226 11:14:50.110520    7616 cli_runner.go:164] Run: docker container inspect multinode-718300-m03 --format={{.State.Status}}
	I0226 11:14:50.282339    7616 status.go:330] multinode-718300-m03 host status = "Stopped" (err=<nil>)
	I0226 11:14:50.282339    7616 status.go:343] host is not running, skipping remaining checks
	I0226 11:14:50.282339    7616 status.go:257] multinode-718300-m03 status: &{Name:multinode-718300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.53s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (22.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 node start m03 --alsologtostderr: (19.7213173s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 status: (2.727768s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (22.90s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (149.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-718300
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-718300
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-718300: (25.7647202s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-718300 --wait=true -v=8 --alsologtostderr
E0226 11:16:45.612247   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-718300 --wait=true -v=8 --alsologtostderr: (2m3.7480737s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-718300
--- PASS: TestMultiNode/serial/RestartKeepsNodes (149.99s)

                                                
                                    
TestMultiNode/serial/DeleteNode (11.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 node delete m03: (8.6259663s)
multinode_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr
multinode_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr: (2.0182951s)
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (11.25s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 stop
multinode_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 stop: (24.0507901s)
multinode_test.go:348: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-718300 status: exit status 7 (544.2973ms)

                                                
                                                
-- stdout --
	multinode-718300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-718300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:18:18.601306    9712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr: exit status 7 (588.6592ms)

                                                
                                                
-- stdout --
	multinode-718300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-718300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:18:19.151633    9752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0226 11:18:19.220507    9752 out.go:291] Setting OutFile to fd 748 ...
	I0226 11:18:19.222533    9752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:18:19.222533    9752 out.go:304] Setting ErrFile to fd 668...
	I0226 11:18:19.222661    9752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:18:19.234901    9752 out.go:298] Setting JSON to false
	I0226 11:18:19.234901    9752 mustload.go:65] Loading cluster: multinode-718300
	I0226 11:18:19.234901    9752 notify.go:220] Checking for updates...
	I0226 11:18:19.235591    9752 config.go:182] Loaded profile config "multinode-718300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 11:18:19.236129    9752 status.go:255] checking status of multinode-718300 ...
	I0226 11:18:19.259352    9752 cli_runner.go:164] Run: docker container inspect multinode-718300 --format={{.State.Status}}
	I0226 11:18:19.431340    9752 status.go:330] multinode-718300 host status = "Stopped" (err=<nil>)
	I0226 11:18:19.431340    9752 status.go:343] host is not running, skipping remaining checks
	I0226 11:18:19.431340    9752 status.go:257] multinode-718300 status: &{Name:multinode-718300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0226 11:18:19.431340    9752 status.go:255] checking status of multinode-718300-m02 ...
	I0226 11:18:19.448185    9752 cli_runner.go:164] Run: docker container inspect multinode-718300-m02 --format={{.State.Status}}
	I0226 11:18:19.616094    9752 status.go:330] multinode-718300-m02 host status = "Stopped" (err=<nil>)
	I0226 11:18:19.616094    9752 status.go:343] host is not running, skipping remaining checks
	I0226 11:18:19.616094    9752 status.go:257] multinode-718300-m02 status: &{Name:multinode-718300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.18s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-718300 --wait=true -v=8 --alsologtostderr --driver=docker
E0226 11:18:55.850766   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-718300 --wait=true -v=8 --alsologtostderr --driver=docker: (1m15.9136775s)
multinode_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr
multinode_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-718300 status --alsologtostderr: (1.9950695s)
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.58s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (71.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-718300
multinode_test.go:480: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-718300-m02 --driver=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-718300-m02 --driver=docker: exit status 14 (271.8164ms)

                                                
                                                
-- stdout --
	* [multinode-718300-m02] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:19:38.556154    3020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-718300-m02' is duplicated with machine name 'multinode-718300-m02' in profile 'multinode-718300'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-718300-m03 --driver=docker
multinode_test.go:488: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-718300-m03 --driver=docker: (1m4.6647974s)
multinode_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-718300
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-718300: exit status 80 (1.1451179s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-718300
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:20:43.490404   13432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-718300-m03 already exists in multinode-718300-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_f30df829a49c27e09829ed66f8254940e71c1eac_13.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-718300-m03
multinode_test.go:500: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-718300-m03: (5.3526057s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (71.66s)

                                                
                                    
TestPreload (183.63s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-169100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0226 11:21:45.606185   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-169100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m50.4327614s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-169100 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-169100 image pull gcr.io/k8s-minikube/busybox: (1.9838276s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-169100
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-169100: (12.534515s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-169100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
E0226 11:23:55.857911   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-169100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (52.7524901s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-169100 image list
helpers_test.go:175: Cleaning up "test-preload-169100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-169100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-169100: (5.1050894s)
--- PASS: TestPreload (183.63s)

                                                
                                    
TestScheduledStopWindows (138.07s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-909300 --memory=2048 --driver=docker
E0226 11:24:48.784816   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-909300 --memory=2048 --driver=docker: (1m7.9735176s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-909300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-909300 --schedule 5m: (1.3468442s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-909300 -n scheduled-stop-909300
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-909300 -n scheduled-stop-909300: (1.314239s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-909300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-909300 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.148591s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-909300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-909300 --schedule 5s: (1.2921336s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-909300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-909300: exit status 7 (399.6983ms)

                                                
                                                
-- stdout --
	scheduled-stop-909300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0226 11:26:16.891782    4636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-909300 -n scheduled-stop-909300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-909300 -n scheduled-stop-909300: exit status 7 (391.6512ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:26:17.298436    3384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-909300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-909300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-909300: (4.1908168s)
--- PASS: TestScheduledStopWindows (138.07s)
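Every stderr block in this run repeats the same `Unable to resolve the current Docker CLI context "default"` warning. The hex directory name in the missing `meta.json` path is not random: Docker keys per-context metadata under `contexts\meta\<sha256(name)>\meta.json`, and the directory cited above should be the digest of the string `default` (a context whose metadata file Docker Desktop does not always materialize on disk). A minimal sketch verifying that correspondence:

```python
import hashlib

# Docker stores CLI context metadata under contexts/meta/<sha256(name)>/meta.json.
# If the warning refers to the built-in "default" context, the directory name
# in the path should equal the SHA-256 digest of "default".
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
```

Since the warning is printed on stderr by every minikube invocation in this run, it is also the direct cause of the `unexpected stderr` failure in TestErrorSpam/setup at the top of this report.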

TestInsufficientStorage (49.34s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-442800 --memory=2048 --output=json --wait=true --driver=docker
E0226 11:26:45.612091   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-442800 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (42.7546278s)

-- stdout --
	{"specversion":"1.0","id":"fabb5817-de8e-415a-83bd-08864502c9fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-442800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0473e4d6-9ba7-4cd3-a0fa-dece387a70f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"c396cfcd-a579-4b3e-8092-10a824b01702","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f4839ac-d6f0-455c-be49-bdedf14e0539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"a11d7615-5bdb-4454-b28a-9b6819b6719f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}
	{"specversion":"1.0","id":"ac92d8aa-7708-4e99-96f3-9f2d94be2e94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4eea42ce-ecbc-41c6-b834-0e39154b9a61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a83381a0-f794-4163-bd5e-6637470c1d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a1febd36-475c-4e4f-885a-eb5242f2fa22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b683b29-6643-43ec-8005-a9d5fd115f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"776fda63-59fd-46db-ba93-77721e77bd7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-442800 in cluster insufficient-storage-442800","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"412f429d-2f5c-4564-9efa-db34dc08f21a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d138942b-53fe-4464-bcb8-0783a8a061e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd52c9a3-4bb7-44f8-9023-193502f9e7f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	W0226 11:26:21.886704   11256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-442800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-442800 --output=json --layout=cluster: exit status 7 (1.1725279s)

-- stdout --
	{"Name":"insufficient-storage-442800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-442800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0226 11:27:04.636004    8888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 11:27:05.654171    8888 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-442800" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-442800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-442800 --output=json --layout=cluster: exit status 7 (1.1893647s)

-- stdout --
	{"Name":"insufficient-storage-442800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-442800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0226 11:27:05.817511   13708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0226 11:27:06.839504   13708 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-442800" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	E0226 11:27:06.873277   13708 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\insufficient-storage-442800\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-442800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-442800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-442800: (4.2195959s)
--- PASS: TestInsufficientStorage (49.34s)
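With `--output=json`, minikube emits one CloudEvents-style JSON object per line (`specversion`, `type`, `data`), which is how TestInsufficientStorage picks up the `RSRC_DOCKER_STORAGE` error above. A minimal sketch of consuming that stream, assuming one event per line as captured in the `-- stdout --` block (the sample event is trimmed from the log):

```python
import json

# One CloudEvents-style object per line, as in the -- stdout -- block above.
lines = [
    '{"specversion":"1.0","id":"cd52c9a3-4bb7-44f8-9023-193502f9e7f9",'
    '"source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space!"}}',
]

for line in lines:
    event = json.loads(line)
    if event["type"] == "io.k8s.sigs.minikube.error":
        data = event["data"]
        # The test asserts on exactly these fields to decide pass/fail.
        print(data["name"], data["exitcode"])
```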

TestRunningBinaryUpgrade (190.71s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.2612746129.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker
E0226 11:31:45.619851   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.2612746129.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker: (1m28.7134937s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-062000 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-062000 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m35.4050661s)
helpers_test.go:175: Cleaning up "running-upgrade-062000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-062000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-062000: (6.0528177s)
--- PASS: TestRunningBinaryUpgrade (190.71s)

TestMissingContainerUpgrade (364.45s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.2852027617.exe start -p missing-upgrade-797800 --memory=2200 --driver=docker
E0226 11:28:39.084977   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 11:28:55.848704   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.2852027617.exe start -p missing-upgrade-797800 --memory=2200 --driver=docker: (3m23.155174s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-797800
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-797800: (11.3625408s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-797800
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-797800 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-797800 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m21.8430405s)
helpers_test.go:175: Cleaning up "missing-upgrade-797800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-797800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-797800: (6.9424525s)
--- PASS: TestMissingContainerUpgrade (364.45s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (399.8889ms)

-- stdout --
	* [NoKubernetes-644400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0226 11:27:11.253829    1972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
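Exit status 14 here accompanies minikube's `MK_USAGE` error, one of several distinct exit statuses visible in this report: 7 from `status` against a stopped host, 2 from `status` against a paused cluster, and 26 alongside `RSRC_DOCKER_STORAGE` in the insufficient-storage run. A sketch of dispatching on them; the mapping below is inferred only from the outputs in this report, not taken from minikube's official exit-code table:

```python
# Exit codes as observed in this report; the labels for 2 and 7 are
# descriptive only, not official minikube constant names.
OBSERVED_EXIT_CODES = {
    0: "success",
    2: "status: cluster paused",
    7: "status: host stopped / error",
    14: "MK_USAGE (bad flag combination)",
    26: "RSRC_DOCKER_STORAGE (out of disk space)",
}

def describe(code: int) -> str:
    """Map an exit status seen in this run to a short description."""
    return OBSERVED_EXIT_CODES.get(code, f"unknown exit status {code}")

print(describe(14))
```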

TestNoKubernetes/serial/StartWithK8s (166.77s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --driver=docker: (2m45.3354039s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-644400 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-644400 status -o json: (1.4379083s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (166.77s)

TestNoKubernetes/serial/StartWithStopK8s (26.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --no-kubernetes --driver=docker: (20.5931516s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-644400 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-644400 status -o json: exit status 2 (1.2763661s)

-- stdout --
	{"Name":"NoKubernetes-644400","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
** stderr ** 
	W0226 11:30:19.003014    6636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-644400
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-644400: (4.716672s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.59s)

TestNoKubernetes/serial/Start (23.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --no-kubernetes --driver=docker: (23.9600728s)
--- PASS: TestNoKubernetes/serial/Start (23.96s)

TestStoppedBinaryUpgrade/Setup (0.56s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

TestStoppedBinaryUpgrade/Upgrade (172.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3624545338.exe start -p stopped-upgrade-369800 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3624545338.exe start -p stopped-upgrade-369800 --memory=2200 --vm-driver=docker: (1m21.5889711s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3624545338.exe -p stopped-upgrade-369800 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3624545338.exe -p stopped-upgrade-369800 stop: (14.3608176s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-369800 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-369800 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m16.8424527s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-644400 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-644400 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.197127s)

** stderr ** 
	W0226 11:30:48.952392    8404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.20s)

TestNoKubernetes/serial/ProfileList (4.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.3093494s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.189269s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.50s)

TestNoKubernetes/serial/Stop (8.05s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-644400
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-644400: (8.0492476s)
--- PASS: TestNoKubernetes/serial/Stop (8.05s)

TestNoKubernetes/serial/StartNoArgs (13.2s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-644400 --driver=docker: (13.1981434s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (13.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-644400 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-644400 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.2630381s)

** stderr ** 
	W0226 11:31:15.918111   10852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.26s)

TestPause/serial/Start (103s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-268400 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-268400 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m43.0015192s)
--- PASS: TestPause/serial/Start (103.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (6.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-369800
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-369800: (6.5749905s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (6.58s)

TestPause/serial/SecondStartNoReconfiguration (49.41s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-268400 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-268400 --alsologtostderr -v=1 --driver=docker: (49.3770172s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.41s)

TestPause/serial/Pause (2s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-268400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-268400 --alsologtostderr -v=5: (1.9966558s)
--- PASS: TestPause/serial/Pause (2.00s)

TestPause/serial/VerifyStatus (1.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-268400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-268400 --output=json --layout=cluster: exit status 2 (1.3796283s)

-- stdout --
	{"Name":"pause-268400","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-268400","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0226 11:35:50.101017    8340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (1.38s)
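`status --output=json --layout=cluster` reuses HTTP-style status codes, and this run exercises most of them: 200 `OK`, 405 `Stopped`, 418 `Paused`, 500 `Error`, and 507 `InsufficientStorage`. A minimal sketch that walks the cluster JSON printed above (trimmed to the fields used here) and flags every non-OK component:

```python
import json

# Cluster status as printed by VerifyStatus above, trimmed to the fields used.
raw = ('{"Name":"pause-268400","StatusCode":418,"StatusName":"Paused",'
       '"Nodes":[{"Name":"pause-268400","StatusCode":200,"Components":{'
       '"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')

cluster = json.loads(raw)

# Collect (node, component, state) for every component not reporting 200/OK.
problems = [
    (node["Name"], comp["Name"], comp["StatusName"])
    for node in cluster["Nodes"]
    for comp in node["Components"].values()
    if comp["StatusCode"] != 200
]
print(problems)
```

This is why the `status` call above exits non-zero (status 2) even though the pause itself succeeded: a paused apiserver and a stopped kubelet are both non-OK components.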

TestPause/serial/Unpause (1.78s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-268400 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-268400 --alsologtostderr -v=5: (1.7789575s)
--- PASS: TestPause/serial/Unpause (1.78s)

TestPause/serial/PauseAgain (2.18s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-268400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-268400 --alsologtostderr -v=5: (2.1817374s)
--- PASS: TestPause/serial/PauseAgain (2.18s)

TestPause/serial/DeletePaused (6.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-268400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-268400 --alsologtostderr -v=5: (6.1249908s)
--- PASS: TestPause/serial/DeletePaused (6.12s)

TestPause/serial/VerifyDeletedResources (3.75s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.1754042s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-268400
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-268400: exit status 1 (195.9994ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-268400: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.75s)

TestStartStop/group/no-preload/serial/FirstStart (129.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-279800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-279800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2: (2m9.4331543s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (129.43s)

TestStartStop/group/no-preload/serial/DeployApp (10.74s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-279800 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6df0c2c8-52be-48db-97b9-6bd7fe5f0851] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6df0c2c8-52be-48db-97b9-6bd7fe5f0851] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0176661s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-279800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.74s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-279800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-279800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3953928s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-279800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.69s)

TestStartStop/group/no-preload/serial/Stop (12.84s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-279800 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-279800 --alsologtostderr -v=3: (12.8352305s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.84s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-279800 -n no-preload-279800
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-279800 -n no-preload-279800: exit status 7 (462.8968ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:40:08.146436   11164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-279800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.19s)

TestStartStop/group/no-preload/serial/SecondStart (400.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-279800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-279800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2: (6m39.2047039s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-279800 -n no-preload-279800
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-279800 -n no-preload-279800: (1.486755s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (400.69s)

TestStartStop/group/embed-certs/serial/FirstStart (92.52s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-755000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-755000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4: (1m32.5173843s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.52s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-336100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4
E0226 11:41:28.795972   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 11:41:45.617387   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-336100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4: (1m40.0261541s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.03s)

TestStartStop/group/embed-certs/serial/DeployApp (10.72s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-755000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9512d34-746b-4c83-b82a-a644a34e8fa4] Pending
helpers_test.go:344: "busybox" [f9512d34-746b-4c83-b82a-a644a34e8fa4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9512d34-746b-4c83-b82a-a644a34e8fa4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0224913s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-755000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.72s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.59s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-755000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-755000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2842042s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-755000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.59s)

TestStartStop/group/embed-certs/serial/Stop (12.71s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-755000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-755000 --alsologtostderr -v=3: (12.7124189s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-336100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cac5f0b0-b9de-4b86-a1c7-5ec2ea5930db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cac5f0b0-b9de-4b86-a1c7-5ec2ea5930db] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0130795s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-336100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-755000 -n embed-certs-755000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-755000 -n embed-certs-755000: exit status 7 (413.899ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:42:23.176565    7220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-755000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.06s)

TestStartStop/group/embed-certs/serial/SecondStart (347.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-755000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-755000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4: (5m45.6107206s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-755000 -n embed-certs-755000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-755000 -n embed-certs-755000: (1.4967812s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (347.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-336100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-336100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.4029451s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-336100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.74s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-336100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-336100 --alsologtostderr -v=3: (12.6805995s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100: exit status 7 (444.5681ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:42:40.323898    6820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-336100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-336100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4
E0226 11:43:55.866190   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 11:45:19.095999   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 11:46:45.620970   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-336100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4: (5m52.2520436s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-336100 -n default-k8s-diff-port-336100: (1.7904429s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.04s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (39.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gfbd8" [22d252e4-3b3b-43bf-b939-b1909c19020a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gfbd8" [22d252e4-3b3b-43bf-b939-b1909c19020a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 39.0228467s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (39.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.4s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gfbd8" [22d252e4-3b3b-43bf-b939-b1909c19020a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0156121s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-279800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.40s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-279800 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.89s)

TestStartStop/group/no-preload/serial/Pause (9.46s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-279800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-279800 --alsologtostderr -v=1: (1.8021997s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-279800 -n no-preload-279800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-279800 -n no-preload-279800: exit status 2 (1.3488674s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0226 11:47:37.161395   11244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-279800 -n no-preload-279800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-279800 -n no-preload-279800: exit status 2 (1.3933376s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:47:38.514300    9800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-279800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-279800 --alsologtostderr -v=1: (1.7159398s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-279800 -n no-preload-279800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-279800 -n no-preload-279800: (1.724198s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-279800 -n no-preload-279800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-279800 -n no-preload-279800: (1.4711506s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (9.46s)

TestStartStop/group/newest-cni/serial/FirstStart (97.76s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-571300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-571300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2: (1m37.7644499s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (97.76s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (26.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kzdzg" [87d989fa-8987-471b-9a33-b295f98021f2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kzdzg" [87d989fa-8987-471b-9a33-b295f98021f2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.0328389s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (26.04s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (28.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ws57n" [337e7055-a8d1-456b-9079-38f2fcdab1d6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ws57n" [337e7055-a8d1-456b-9079-38f2fcdab1d6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 28.018021s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (28.02s)

TestStartStop/group/old-k8s-version/serial/Stop (3.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-321200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-321200 --alsologtostderr -v=3: (3.8618405s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.86s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.64s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kzdzg" [87d989fa-8987-471b-9a33-b295f98021f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0307561s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-755000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-321200 -n old-k8s-version-321200: exit status 7 (530.8496ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:48:40.775318    8640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-321200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.31s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-755000 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-755000 image list --format=json: (1.2260025s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.23s)

TestStartStop/group/embed-certs/serial/Pause (11.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-755000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-755000 --alsologtostderr -v=1: (3.0605519s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-755000 -n embed-certs-755000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-755000 -n embed-certs-755000: exit status 2 (1.5061848s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0226 11:48:48.342755    4384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-755000 -n embed-certs-755000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-755000 -n embed-certs-755000: exit status 2 (1.4246497s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:48:49.829917    3028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-755000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-755000 --alsologtostderr -v=1: (1.8240944s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-755000 -n embed-certs-755000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-755000 -n embed-certs-755000: (1.7022343s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-755000 -n embed-certs-755000
E0226 11:48:55.856733   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-755000 -n embed-certs-755000: (1.7513784s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (11.27s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ws57n" [337e7055-a8d1-456b-9079-38f2fcdab1d6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0218486s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-336100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.46s)

TestNetworkPlugins/group/auto/Start (104.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m44.3932593s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.39s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-336100 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-336100 image list --format=json: (1.1593996s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-571300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-571300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.1611056s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.16s)

TestStartStop/group/newest-cni/serial/Stop (9.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-571300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-571300 --alsologtostderr -v=3: (9.2377336s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-571300 -n newest-cni-571300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-571300 -n newest-cni-571300: exit status 7 (490.546ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:49:43.147105    8112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-571300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0226 11:49:43.538655   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.22s)

TestStartStop/group/newest-cni/serial/SecondStart (54.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-571300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2
E0226 11:49:44.820738   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:49:47.385476   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-571300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2: (52.8972314s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-571300 -n newest-cni-571300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-571300 -n newest-cni-571300: (1.3662213s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (54.26s)

TestNetworkPlugins/group/kindnet/Start (111.3s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E0226 11:50:02.768961   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:50:23.256439   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m51.3033094s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (111.30s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.89s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-571300 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.89s)

TestStartStop/group/newest-cni/serial/Pause (11.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-571300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-571300 --alsologtostderr -v=1: (2.880633s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-571300 -n newest-cni-571300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-571300 -n newest-cni-571300: exit status 2 (1.4754193s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0226 11:50:42.393260    5260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-571300 -n newest-cni-571300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-571300 -n newest-cni-571300: exit status 2 (1.4793068s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0226 11:50:43.882388    7272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-571300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-571300 --alsologtostderr -v=1: (1.8687421s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-571300 -n newest-cni-571300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-571300 -n newest-cni-571300: (1.9569378s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-571300 -n newest-cni-571300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-571300 -n newest-cni-571300: (1.6464167s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (11.31s)

TestNetworkPlugins/group/auto/KubeletFlags (1.66s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-968100 "pgrep -a kubelet": (1.6536276s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.66s)

TestNetworkPlugins/group/auto/NetCatPod (19.7s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dtzcs" [e860b517-9fe4-4ae2-9423-b1ade618a632] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dtzcs" [e860b517-9fe4-4ae2-9423-b1ade618a632] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 19.0082787s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (19.70s)

TestNetworkPlugins/group/calico/Start (193.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E0226 11:51:04.217926   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (3m13.4052671s)
--- PASS: TestNetworkPlugins/group/calico/Start (193.41s)

TestNetworkPlugins/group/auto/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

TestNetworkPlugins/group/auto/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.31s)

TestNetworkPlugins/group/auto/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7qs9c" [8c18ee70-f99a-4eef-a2a5-133184ba3783] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0243768s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-968100 "pgrep -a kubelet": (1.3119429s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (18.75s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-59496" [aaf1273a-8b24-47e7-abc4-1b015f4e3c17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-59496" [aaf1273a-8b24-47e7-abc4-1b015f4e3c17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 18.0161073s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (18.75s)

TestNetworkPlugins/group/kindnet/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.35s)

TestNetworkPlugins/group/kindnet/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0226 11:52:16.086190   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.35s)

TestNetworkPlugins/group/kindnet/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)

TestNetworkPlugins/group/custom-flannel/Start (116.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E0226 11:52:18.654152   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
E0226 11:52:23.776083   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
E0226 11:52:26.151604   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
E0226 11:52:34.028687   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
E0226 11:52:54.516069   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m56.1914245s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (116.19s)

TestNetworkPlugins/group/false/Start (87.27s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E0226 11:53:35.821039   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
E0226 11:53:55.861846   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m27.2664081s)
--- PASS: TestNetworkPlugins/group/false/Start (87.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xpfzz" [bfaa04db-c2ab-4cac-8143-e7e736738c1a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0200292s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-968100 "pgrep -a kubelet": (1.269779s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (18.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g9455" [fdb66d07-db4b-4797-93b2-f7a88d63ba14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g9455" [fdb66d07-db4b-4797-93b2-f7a88d63ba14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 18.0163593s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (18.75s)

TestNetworkPlugins/group/calico/KubeletFlags (1.92s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-968100 "pgrep -a kubelet": (1.9211083s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.92s)

TestNetworkPlugins/group/calico/NetCatPod (19.05s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bgmgz" [1b81600a-20ac-45bc-a023-d47419eef111] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bgmgz" [1b81600a-20ac-45bc-a023-d47419eef111] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 18.0091692s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (19.05s)

TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.31s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.31s)

TestNetworkPlugins/group/calico/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

TestNetworkPlugins/group/calico/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

TestNetworkPlugins/group/calico/HairPin (0.30s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.30s)

TestNetworkPlugins/group/false/KubeletFlags (1.29s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-968100 "pgrep -a kubelet": (1.294658s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.29s)

TestNetworkPlugins/group/false/NetCatPod (17.71s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7f8hs" [74702119-308e-4b14-9e83-1cdd2b0f2196] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 11:54:57.750279   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-7f8hs" [74702119-308e-4b14-9e83-1cdd2b0f2196] Running
E0226 11:55:09.994934   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-279800\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 17.0231257s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (17.71s)

TestNetworkPlugins/group/false/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.39s)

TestNetworkPlugins/group/false/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.33s)

TestNetworkPlugins/group/false/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.32s)

TestNetworkPlugins/group/enable-default-cni/Start (105.50s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m45.500679s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (105.50s)

TestNetworkPlugins/group/flannel/Start (113.39s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E0226 11:55:53.195636   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.210484   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.225998   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.257197   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.303999   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.396296   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.569144   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:53.898600   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:54.548904   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:55.829310   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:55:58.394541   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:56:03.518832   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m53.3946571s)
--- PASS: TestNetworkPlugins/group/flannel/Start (113.39s)

TestNetworkPlugins/group/bridge/Start (104.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E0226 11:56:34.247590   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
E0226 11:56:45.622224   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 11:56:49.466279   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:49.481488   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:49.497240   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:49.521756   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:49.574343   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:49.666679   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:49.840508   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:50.170970   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:50.820311   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:52.110680   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:54.671371   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:56:59.958281   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:57:10.206471   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 11:57:13.458420   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
E0226 11:57:15.218352   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-968100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m44.4058402s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.41s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-968100 "pgrep -a kubelet": (1.2589305s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rmp7p" [4fae35cd-afca-40ce-8313-d4dda77bab71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 11:57:30.693923   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-rmp7p" [4fae35cd-afca-40ce-8313-d4dda77bab71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 17.0178852s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.82s)

TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xkb9p" [a021df10-4dbc-4642-9d2c-1fd0d355ffce] Running
E0226 11:57:41.591869   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-336100\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0139933s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.34s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

TestNetworkPlugins/group/flannel/KubeletFlags (1.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-968100 "pgrep -a kubelet": (1.3239624s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.32s)

TestNetworkPlugins/group/flannel/NetCatPod (17.60s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bz9pb" [3d1a30e7-affc-4781-bc6f-580ba5f89986] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bz9pb" [3d1a30e7-affc-4781-bc6f-580ba5f89986] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 17.0214018s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (17.60s)

TestNetworkPlugins/group/flannel/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.38s)

TestNetworkPlugins/group/flannel/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.33s)

TestNetworkPlugins/group/flannel/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.32s)

TestNetworkPlugins/group/bridge/KubeletFlags (1.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-968100 "pgrep -a kubelet": (1.2418723s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.24s)

TestNetworkPlugins/group/bridge/NetCatPod (20.63s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7csts" [9cc88609-d0bf-4985-b4dc-4f54b6091c21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7csts" [9cc88609-d0bf-4985-b4dc-4f54b6091c21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 20.0100378s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (20.63s)

TestNetworkPlugins/group/bridge/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.35s)

TestNetworkPlugins/group/bridge/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.35s)

TestNetworkPlugins/group/bridge/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.32s)

TestNetworkPlugins/group/kubenet/Start (96.52s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E0226 11:58:55.864017   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-968100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m36.5164972s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (96.52s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-968100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-968100 "pgrep -a kubelet": (1.1484236s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.15s)

TestNetworkPlugins/group/kubenet/NetCatPod (17.64s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-968100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2tgc9" [9eea6cd0-4bfe-40b3-ae20-efb90c17b44e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 12:00:34.810038   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-968100\client.crt: The system cannot find the path specified.
E0226 12:00:36.046467   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-968100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-2tgc9" [9eea6cd0-4bfe-40b3-ae20-efb90c17b44e] Running
E0226 12:00:38.681333   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-968100\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 17.0206889s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (17.64s)

TestNetworkPlugins/group/kubenet/DNS (0.33s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-968100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.33s)

TestNetworkPlugins/group/kubenet/Localhost (0.31s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.31s)

TestNetworkPlugins/group/kubenet/HairPin (0.28s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-968100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.28s)
E0226 12:01:45.627986   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-366900\client.crt: The system cannot find the path specified.
E0226 12:01:49.479629   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-968100\client.crt: The system cannot find the path specified.
E0226 12:01:56.731154   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-968100\client.crt: The system cannot find the path specified.
E0226 12:01:59.106646   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-559100\client.crt: The system cannot find the path specified.
E0226 12:02:00.603709   11868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-968100\client.crt: The system cannot find the path specified.

Test skip (27/327)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (18.21s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 22.8268ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6rk4c" [a7b07fe0-3f78-4374-bcf9-69a93d083bc4] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0218825s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tmsl5" [c720dcd1-b0f9-41fb-a227-33564148f268] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0426472s
addons_test.go:340: (dbg) Run:  kubectl --context addons-559100 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-559100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-559100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.8607417s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.21s)

TestAddons/parallel/Ingress (24.06s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-559100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-559100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-559100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1f81b7f7-d59c-4511-8190-078b1ffc5d07] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1f81b7f7-d59c-4511-8190-078b1ffc5d07] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 21.0601681s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-559100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-559100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.3341279s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-559100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0226 10:34:51.374865   11104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (24.06s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-366900 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-366900 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 9876: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.4s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-366900 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-366900 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-tmdsn" [ea7e4f2f-3f0e-4e3b-894f-66fd931d8c8e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-tmdsn" [ea7e4f2f-3f0e-4e3b-894f-66fd931d8c8e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.0216337s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.40s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-591000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-591000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-591000: (1.1826061s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.18s)

TestNetworkPlugins/group/cilium (15.62s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-968100 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-968100

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-968100

>>> host: /etc/nsswitch.conf:
W0226 11:34:35.779218   10748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/hosts:
W0226 11:34:36.070117    8680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/resolv.conf:
W0226 11:34:36.373856    8072 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-968100

>>> host: crictl pods:
W0226 11:34:36.809551     936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: crictl containers:
W0226 11:34:37.076411   13820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> k8s: describe netcat deployment:
error: context "cilium-968100" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-968100" does not exist

>>> k8s: netcat logs:
error: context "cilium-968100" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-968100" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-968100" does not exist

>>> k8s: coredns logs:
error: context "cilium-968100" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-968100" does not exist

>>> k8s: api server logs:
error: context "cilium-968100" does not exist

>>> host: /etc/cni:
W0226 11:34:38.562845    8328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: ip a s:
W0226 11:34:39.033985   13960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: ip r s:
W0226 11:34:39.320981    2544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: iptables-save:
W0226 11:34:39.615009   14212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: iptables table nat:
W0226 11:34:39.926392    6828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-968100

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-968100

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-968100" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-968100" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-968100

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-968100

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-968100" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-968100" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-968100" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-968100" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-968100" does not exist

>>> host: kubelet daemon status:
W0226 11:34:41.956134    4576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: kubelet daemon config:
W0226 11:34:42.246195    9416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> k8s: kubelet logs:
W0226 11:34:42.559791   11040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/kubernetes/kubelet.conf:
W0226 11:34:42.857765   12820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /var/lib/kubelet/config.yaml:
W0226 11:34:43.166226    7264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-968100

>>> host: docker daemon status:
W0226 11:34:43.744583    9552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: docker daemon config:
W0226 11:34:44.026182    9752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/docker/daemon.json:
W0226 11:34:44.289777   12024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: docker system info:
W0226 11:34:44.550810   10880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: cri-docker daemon status:
W0226 11:34:44.822919   10744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: cri-docker daemon config:
W0226 11:34:45.070852    5864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0226 11:34:45.347850    1932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /usr/lib/systemd/system/cri-docker.service:
W0226 11:34:45.638467    5268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: cri-dockerd version:
W0226 11:34:45.908646    8496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: containerd daemon status:
W0226 11:34:46.200791    9896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: containerd daemon config:
W0226 11:34:46.459612   10444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /lib/systemd/system/containerd.service:
W0226 11:34:46.733530    1980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/containerd/config.toml:
W0226 11:34:46.977515   11684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: containerd config dump:
W0226 11:34:47.257583    3008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: crio daemon status:
W0226 11:34:47.518830    2552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: crio daemon config:
W0226 11:34:47.805323    8888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: /etc/crio:
W0226 11:34:48.083878   10716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

>>> host: crio config:
W0226 11:34:48.358153    8500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-968100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-968100"

----------------------- debugLogs end: cilium-968100 [took: 14.3451891s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-968100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-968100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-968100: (1.2755175s)
--- SKIP: TestNetworkPlugins/group/cilium (15.62s)
